Feb 02 14:33:16 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 14:33:16 crc restorecon[4752]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:16 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
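
Note: the files under the etcd static pod above carry one of three MCS category pairs (c294,c884 / c336,c1016 / c666,c920) that repeat in the same order across every container directory, which would be consistent with three container instances, each allocated its own category pair. To check what is actually on disk rather than what restorecon prints, the label can be read from the security.selinux extended attribute. A minimal sketch, Linux-only, assuming enough privilege to read under /var/lib/kubelet; the example path is copied from the log:

    import os

    def selinux_label(path: str) -> str:
        # The SELinux context is stored in the security.selinux xattr;
        # the raw value is NUL-terminated bytes.
        return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()

    # Example path from the entries above; expected label per the log:
    # system_u:object_r:container_file_t:s0:c294,c884
    print(selinux_label(
        "/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d"
        "/containers/etcdctl/8bc85570"
    ))

Feb 02 14:33:17 crc restorecon[4752]: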
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 
14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 14:33:17 crc 
restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:17 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 14:33:18 crc restorecon[4752]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc 
restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 14:33:18 crc restorecon[4752]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 14:33:19 crc kubenswrapper[4869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 14:33:19 crc kubenswrapper[4869]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 14:33:19 crc kubenswrapper[4869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 14:33:19 crc kubenswrapper[4869]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 02 14:33:19 crc kubenswrapper[4869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 14:33:19 crc kubenswrapper[4869]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.100188 4869 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111439 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111493 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111500 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111505 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111510 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111516 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111521 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111526 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111530 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111535 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111539 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111544 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111548 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111552 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111557 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111564 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111571 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111576 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111580 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111584 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111588 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111593 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111597 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111601 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111606 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111612 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111617 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111622 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111627 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111632 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111636 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111641 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111646 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111654 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111661 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111666 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111670 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111676 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111681 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111686 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111690 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111696 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111703 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111708 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111713 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111718 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111724 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111730 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111736 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111742 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111747 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111752 4869 feature_gate.go:330] unrecognized feature gate: Example Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111758 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111763 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111768 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111772 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111776 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111781 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111787 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111792 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111796 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111800 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111804 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111809 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111813 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111821 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111825 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111830 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111835 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111839 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.111844 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.111988 4869 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112002 4869 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112017 4869 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112027 4869 flags.go:64] FLAG: 
--application-metrics-count-limit="100" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112036 4869 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112042 4869 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112051 4869 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112058 4869 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112064 4869 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112070 4869 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112076 4869 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112087 4869 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112094 4869 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112100 4869 flags.go:64] FLAG: --cgroup-root="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112104 4869 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112110 4869 flags.go:64] FLAG: --client-ca-file="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112115 4869 flags.go:64] FLAG: --cloud-config="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112120 4869 flags.go:64] FLAG: --cloud-provider="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112126 4869 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112134 4869 flags.go:64] FLAG: --cluster-domain="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112139 4869 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112144 4869 flags.go:64] FLAG: --config-dir="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112149 4869 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112156 4869 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112164 4869 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112170 4869 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112176 4869 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112182 4869 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112188 4869 flags.go:64] FLAG: --contention-profiling="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112194 4869 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112199 4869 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112205 4869 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112210 4869 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 
14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112217 4869 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112222 4869 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112228 4869 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112234 4869 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112239 4869 flags.go:64] FLAG: --enable-server="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112245 4869 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112252 4869 flags.go:64] FLAG: --event-burst="100" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112257 4869 flags.go:64] FLAG: --event-qps="50" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112262 4869 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112267 4869 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112272 4869 flags.go:64] FLAG: --eviction-hard="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112279 4869 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112284 4869 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112289 4869 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112295 4869 flags.go:64] FLAG: --eviction-soft="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112300 4869 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112305 4869 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112311 4869 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112316 4869 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112321 4869 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112326 4869 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112332 4869 flags.go:64] FLAG: --feature-gates="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112339 4869 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112344 4869 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112349 4869 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112355 4869 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112360 4869 flags.go:64] FLAG: --healthz-port="10248" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112366 4869 flags.go:64] FLAG: --help="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112372 4869 flags.go:64] FLAG: --hostname-override="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112377 4869 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 14:33:19 crc kubenswrapper[4869]: 
I0202 14:33:19.112382 4869 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112387 4869 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112393 4869 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112398 4869 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112403 4869 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112409 4869 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112414 4869 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112419 4869 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112424 4869 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112430 4869 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112435 4869 flags.go:64] FLAG: --kube-reserved="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112440 4869 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112445 4869 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112450 4869 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112455 4869 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112460 4869 flags.go:64] FLAG: --lock-file="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112465 4869 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112470 4869 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112476 4869 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112484 4869 flags.go:64] FLAG: --log-json-split-stream="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112494 4869 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112501 4869 flags.go:64] FLAG: --log-text-split-stream="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112506 4869 flags.go:64] FLAG: --logging-format="text" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112511 4869 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112517 4869 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112523 4869 flags.go:64] FLAG: --manifest-url="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112529 4869 flags.go:64] FLAG: --manifest-url-header="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112538 4869 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112544 4869 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112551 4869 flags.go:64] FLAG: --max-pods="110" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 
14:33:19.112557 4869 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112562 4869 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112568 4869 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112574 4869 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112579 4869 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112584 4869 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112591 4869 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112606 4869 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112612 4869 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112618 4869 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112623 4869 flags.go:64] FLAG: --pod-cidr="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112628 4869 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112639 4869 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112644 4869 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112650 4869 flags.go:64] FLAG: --pods-per-core="0" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112655 4869 flags.go:64] FLAG: --port="10250" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112660 4869 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112666 4869 flags.go:64] FLAG: --provider-id="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112671 4869 flags.go:64] FLAG: --qos-reserved="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112676 4869 flags.go:64] FLAG: --read-only-port="10255" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112682 4869 flags.go:64] FLAG: --register-node="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112687 4869 flags.go:64] FLAG: --register-schedulable="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112693 4869 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112704 4869 flags.go:64] FLAG: --registry-burst="10" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112710 4869 flags.go:64] FLAG: --registry-qps="5" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112716 4869 flags.go:64] FLAG: --reserved-cpus="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112723 4869 flags.go:64] FLAG: --reserved-memory="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112731 4869 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112738 4869 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112744 
4869 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112749 4869 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112755 4869 flags.go:64] FLAG: --runonce="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112760 4869 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112765 4869 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112771 4869 flags.go:64] FLAG: --seccomp-default="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112776 4869 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112781 4869 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112787 4869 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112793 4869 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112798 4869 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112802 4869 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112807 4869 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112812 4869 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112817 4869 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112822 4869 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112827 4869 flags.go:64] FLAG: --system-cgroups="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112832 4869 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112842 4869 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112854 4869 flags.go:64] FLAG: --tls-cert-file="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112860 4869 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112870 4869 flags.go:64] FLAG: --tls-min-version="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112874 4869 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112879 4869 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112885 4869 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112890 4869 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112896 4869 flags.go:64] FLAG: --v="2" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112922 4869 flags.go:64] FLAG: --version="false" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112932 4869 flags.go:64] FLAG: --vmodule="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112940 4869 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.112945 
4869 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113095 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113102 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113109 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113114 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113119 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113125 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113130 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113135 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113139 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113143 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113149 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113154 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113158 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113163 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113167 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113174 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
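The long flags.go:64 dump that ends just above is the kubelet, at its configured verbosity (--v="2"), printing the effective value of every command-line flag, defaults included. A value here is not always what the node ends up using: the dump shows --cgroup-driver="cgroupfs", but a later entry ("Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd") shows the driver being taken from CRI-O instead. A minimal sketch for turning the dump into a lookup table, under the same saved-journal assumption as above (hypothetical file name); entries whose value wraps onto the next journal line are simply skipped:

    import re

    # Parse 'flags.go:64] FLAG: --name="value"' entries into a dict.
    # The journal file path is a hypothetical placeholder.
    FLAG = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

    def parse_flags(journal_path="kubelet-journal.txt"):
        flags = {}
        with open(journal_path, encoding="utf-8") as fh:
            for line in fh:
                for name, value in FLAG.findall(line):
                    flags[name] = value
        return flags

    if __name__ == "__main__":
        flags = parse_flags()
        print(flags.get("--cgroup-driver"))  # "cgroupfs" in the dump above
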
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113180 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113185 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113190 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113195 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113204 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113209 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113214 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113219 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113223 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113227 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113232 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113237 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113242 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113247 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113252 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113257 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113266 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113271 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113275 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113279 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113284 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113289 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113297 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113303 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113308 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113314 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113319 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113324 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113329 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113335 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113341 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113346 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113350 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113354 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113359 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113365 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113373 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113378 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113383 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113387 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113392 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113396 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113400 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113404 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113409 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113413 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113418 4869 feature_gate.go:330] unrecognized feature gate: Example Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113423 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113430 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113435 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113439 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113444 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113448 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113453 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.113457 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.113466 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.125295 4869 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.125333 4869 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125409 4869 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125418 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125423 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125428 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125432 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125436 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125441 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125446 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125451 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125456 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125461 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125466 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125470 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125474 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125478 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125482 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125486 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125490 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125493 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125497 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125500 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125504 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125507 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125511 4869 feature_gate.go:330] unrecognized feature gate: Example Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125514 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125518 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125521 4869 feature_gate.go:330] 
unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125525 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125528 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125532 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125535 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125583 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125587 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125590 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125595 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125599 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125602 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125606 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125610 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125624 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125628 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125633 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125637 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125642 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125646 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125651 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125656 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125660 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125664 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125669 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125673 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125678 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125681 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125686 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125689 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125693 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125697 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125700 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125704 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125707 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125711 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125714 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125718 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125722 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125725 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125729 4869 feature_gate.go:330] unrecognized 
feature gate: RouteAdvertisements Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125732 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125736 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125739 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125742 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125748 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.125755 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125878 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125885 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125891 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125896 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125902 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
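The "unrecognized feature gate" warnings repeat in several near-identical passes between 14:33:19.111 and 14:33:19.126 because the gate map appears to be re-applied at successive stages of startup; several of the passes end with a feature_gate.go:386 line giving the effective map, which is identical each time. The gates being warned about (GatewayAPI, AdminNetworkPolicy, PinnedImages, and so on) are OpenShift-level gates that the embedded Kubernetes feature-gate registry does not know, so on this platform the warnings read as expected noise rather than a configuration error. A sketch that dedupes and tallies them, under the same saved-journal assumption as above:

    import re
    from collections import Counter

    # Tally distinct "unrecognized feature gate" warnings in a journal dump.
    # The file path is a hypothetical placeholder; findall copes with several
    # warnings sharing one physical line.
    GATE = re.compile(r"unrecognized feature gate: (\w+)")

    def gate_counts(journal_path="kubelet-journal.txt"):
        counts = Counter()
        with open(journal_path, encoding="utf-8") as fh:
            for line in fh:
                counts.update(GATE.findall(line))
        return counts

    if __name__ == "__main__":
        counts = gate_counts()
        print(f"{len(counts)} distinct gates, warned up to "
              f"{max(counts.values(), default=0)} times each")
        for gate, n in counts.most_common():
            print(f"  {gate}: {n}")
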
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125934 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125938 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125942 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125946 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125950 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125954 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125957 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125960 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125964 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125968 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125971 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125975 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125978 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125982 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125986 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125989 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125993 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.125997 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126002 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126007 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126011 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126015 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126019 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126022 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126026 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126031 4869 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiNetworks Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126036 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126040 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126044 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126050 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126056 4869 feature_gate.go:330] unrecognized feature gate: Example Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126060 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126064 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126068 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126072 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126079 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126087 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126098 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126104 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126109 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126115 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126120 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126125 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126129 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126133 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126137 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126141 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126146 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126150 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126155 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126161 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126166 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126171 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126174 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126178 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126182 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126186 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126190 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126193 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126197 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126202 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126206 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126209 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126213 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126216 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.126221 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.126227 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.126449 4869 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.132688 4869 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.132808 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
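The bootstrap and certificate entries here, together with the rotation entries just below, trace the kubelet's client-certificate lifecycle: the existing kubeconfig is still valid so no bootstrap is needed, the current cert/key pair is loaded from /var/lib/kubelet/pki/kubelet-client-current.pem, and the certificate manager schedules rotation well before expiry (expiration 2026-02-24, rotation deadline 2025-12-02, consistent with upstream's practice of picking a jittered deadline somewhere around 70-90% of the certificate's validity window). The first rotation attempt then fails with "connection refused" against api-int.crc.testing:6443 simply because the API server is not up yet this early in boot; the manager keeps using the current certificate and retries. A sketch of that deadline arithmetic, with the jitter range and the issue time stated as assumptions (the log shows only the expiry):

    import random
    from datetime import datetime, timedelta

    # Jittered rotation deadline at roughly 70-90% of the cert's validity
    # window. The 0.7 + 0.2*rand constants and the not_before value are
    # assumptions for illustration, not read from this log.
    def rotation_deadline(not_before, not_after, rng=None):
        rng = rng or random.Random()
        total = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=total * (0.7 + 0.2 * rng.random()))

    if __name__ == "__main__":
        not_before = datetime(2025, 2, 24, 5, 52, 8)  # assumed issue time
        not_after = datetime(2026, 2, 24, 5, 52, 8)   # expiry from the log
        print(rotation_deadline(not_before, not_after, random.Random(0)))
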
Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.134467 4869 server.go:997] "Starting client certificate rotation" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.134493 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.134632 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-02 14:03:40.733048488 +0000 UTC Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.134718 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.160152 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.162477 4869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.163068 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.214392 4869 log.go:25] "Validated CRI v1 runtime API" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.315762 4869 log.go:25] "Validated CRI v1 image API" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.318369 4869 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.328241 4869 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-14-28-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.328280 4869 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.344451 4869 manager.go:217] Machine: {Timestamp:2026-02-02 14:33:19.341385291 +0000 UTC m=+0.986022071 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0aa343f6-2c18-4e4e-b19b-25e42d92b529 BootID:1c099235-d602-4e51-9f67-7e55e0b34cd4 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7d:32:e1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7d:32:e1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:91:00:e8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:51:04:3e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ab:68:3b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:98:e6:22 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:73:76:27 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:03:49:81:76:b9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:a0:60:71:f6:6a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.344806 4869 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.345211 4869 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.345593 4869 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.345889 4869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.346091 4869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.346349 4869 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 
14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.346364 4869 container_manager_linux.go:303] "Creating device plugin manager" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.349337 4869 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.349384 4869 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.349664 4869 state_mem.go:36] "Initialized new in-memory state store" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.351260 4869 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.361996 4869 kubelet.go:418] "Attempting to sync node with API server" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.362020 4869 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.362045 4869 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.362061 4869 kubelet.go:324] "Adding apiserver pod source" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.362081 4869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.372324 4869 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.373559 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.377108 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.377285 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.377155 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.377342 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.380182 4869 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382236 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382274 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382284 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382293 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382315 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382323 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382332 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382346 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382356 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382366 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382391 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.382400 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.385882 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 
14:33:19.387165 4869 server.go:1280] "Started kubelet" Feb 02 14:33:19 crc systemd[1]: Started Kubernetes Kubelet. Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.389209 4869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.391008 4869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.392048 4869 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.392225 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.402402 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.402485 4869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.402606 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:03:54.668633995 +0000 UTC Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.402929 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.403025 4869 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.403063 4869 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.403134 4869 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.403536 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="200ms" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.402495 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.82:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890748c46b51a59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 14:33:19.387122265 +0000 UTC m=+1.031759045,LastTimestamp:2026-02-02 14:33:19.387122265 +0000 UTC m=+1.031759045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.404463 4869 server.go:460] "Adding debug handlers to kubelet server" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.405415 4869 factory.go:55] Registering systemd factory Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.405479 4869 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.405576 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.406212 4869 factory.go:221] Registration of the systemd container factory successfully Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.406704 4869 factory.go:153] Registering CRI-O factory Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.406851 4869 factory.go:221] Registration of the crio container factory successfully Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.407070 4869 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.407202 4869 factory.go:103] Registering Raw factory Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.407333 4869 manager.go:1196] Started watching for new ooms in manager Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.408260 4869 manager.go:319] Starting recovery of all containers Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410308 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410746 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410775 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410791 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410804 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410817 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" 
seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410836 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410851 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410868 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410887 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410900 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410941 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.410988 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411004 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411018 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411038 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411049 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 14:33:19 crc 
kubenswrapper[4869]: I0202 14:33:19.411059 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411071 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411083 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411094 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411141 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411150 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411164 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411176 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411188 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411206 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411216 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 
14:33:19.411228 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411239 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411249 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411260 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411270 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411282 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411291 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411303 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411362 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411384 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411401 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 
14:33:19.411415 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411430 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411446 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411468 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411485 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411498 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411512 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411579 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411611 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411656 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411667 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411676 4869 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411686 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411717 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411727 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411737 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411763 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411774 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411783 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411791 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411838 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411853 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411866 4869 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411904 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.411963 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414264 4869 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414314 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414330 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414342 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414354 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414394 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414404 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414423 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: 
I0202 14:33:19.414432 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414443 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414455 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414482 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414492 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414519 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414533 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414562 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414576 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414589 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414600 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414612 4869 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414625 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414653 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414686 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414723 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414732 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414743 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414754 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414764 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414773 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414798 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414808 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414849 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414863 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414875 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414885 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414942 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414957 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414982 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.414996 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415018 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415030 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415088 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415100 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415120 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415131 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415151 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415161 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415194 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415233 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415243 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415251 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415312 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415325 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415335 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415346 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415377 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415388 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415400 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415413 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415469 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415504 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415526 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415536 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415557 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415566 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415574 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415584 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415615 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415626 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415647 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415657 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415687 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415698 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415711 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415723 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415755 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415767 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415801 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415814 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415899 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415934 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415946 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415958 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415969 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415983 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.415993 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416003 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416035 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416081 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416111 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416123 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416134 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416147 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416190 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416203 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416221 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416234 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416247 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416257 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416267 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416278 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416304 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416314 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416324 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416334 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416344 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416353 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416376 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416384 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416404 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416412 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416423 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416433 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416442 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416452 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416464 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416478 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416492 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416503 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416514 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416530 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416551 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416566 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416576 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416590 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416606 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416619 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416631 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416641 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416655 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416668 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416681 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416693 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416705 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416715 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416726 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416736 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416747 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416756 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416767 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416777 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416787 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416796 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416807 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416816 4869 reconstruct.go:97] "Volume reconstruction finished" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.416825 4869 reconciler.go:26] "Reconciler: start to sync state" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.435951 4869 manager.go:324] Recovery completed Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.448336 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.450425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.450574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.450645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.451887 4869 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.451994 4869 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.452121 4869 state_mem.go:36] "Initialized new in-memory state store" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.459201 4869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.461289 4869 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.461350 4869 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.461386 4869 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.461437 4869 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 14:33:19 crc kubenswrapper[4869]: W0202 14:33:19.462187 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.462291 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.477327 4869 policy_none.go:49] "None policy: Start" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.479144 4869 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.479211 4869 state_mem.go:35] "Initializing new in-memory state store" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.504044 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.540418 4869 manager.go:334] "Starting Device Plugin manager" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.540720 4869 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.540739 4869 server.go:79] "Starting device plugin registration server" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.541352 4869 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.541380 4869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.541571 4869 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.541759 4869 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.541780 4869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.548430 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.561729 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 14:33:19 crc kubenswrapper[4869]: 
I0202 14:33:19.561890 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.563159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.563203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.563215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.563379 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.563592 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.563640 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.564197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.564221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.564231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.564299 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.564552 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.564636 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.565004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.565029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.565038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567066 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567400 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567639 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.567705 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568555 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.568691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.569088 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.569175 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.569586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.569618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.569650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.569982 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.570024 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.571892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.571952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.571966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.571932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.572060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.572075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.604184 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="400ms" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619688 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619802 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 
14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.619940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620104 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620288 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620328 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620402 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.620525 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.642261 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.644035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.644112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.644134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.644172 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.644963 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.82:6443: connect: connection refused" node="crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723825 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") 
" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723975 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.723997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724036 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724093 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724184 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724428 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724438 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724442 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724210 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.724547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.845971 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.847929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.848011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.848040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.848084 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:19 crc kubenswrapper[4869]: E0202 14:33:19.848792 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.82:6443: connect: connection refused" node="crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.886629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.899797 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.909295 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.933419 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:19 crc kubenswrapper[4869]: I0202 14:33:19.938092 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:20 crc kubenswrapper[4869]: E0202 14:33:20.005531 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="800ms" Feb 02 14:33:20 crc kubenswrapper[4869]: W0202 14:33:20.048819 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b4a7efc6d1da75e5f3dc1a88730740270d7e2dd1b9a43546d381bdba2e4c31f1 WatchSource:0}: Error finding container b4a7efc6d1da75e5f3dc1a88730740270d7e2dd1b9a43546d381bdba2e4c31f1: Status 404 returned error can't find the container with id b4a7efc6d1da75e5f3dc1a88730740270d7e2dd1b9a43546d381bdba2e4c31f1 Feb 02 14:33:20 crc kubenswrapper[4869]: W0202 14:33:20.051045 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cfa72f79dabb52e799fe1b2af151ed7242d9948dd2aab3f270e38d6c440f1289 WatchSource:0}: Error finding container cfa72f79dabb52e799fe1b2af151ed7242d9948dd2aab3f270e38d6c440f1289: Status 404 returned error can't find the container with id cfa72f79dabb52e799fe1b2af151ed7242d9948dd2aab3f270e38d6c440f1289 Feb 02 14:33:20 crc kubenswrapper[4869]: W0202 14:33:20.061669 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c49a5d8ff5f9465a3ccd1e276faf8569969243664955313c6fc334f39658f242 WatchSource:0}: Error finding container c49a5d8ff5f9465a3ccd1e276faf8569969243664955313c6fc334f39658f242: Status 404 returned error can't find the container with id c49a5d8ff5f9465a3ccd1e276faf8569969243664955313c6fc334f39658f242 Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.249810 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.251408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.251443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.251452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.251480 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:20 crc kubenswrapper[4869]: E0202 14:33:20.252208 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.82:6443: connect: connection refused" node="crc" Feb 02 14:33:20 crc kubenswrapper[4869]: W0202 14:33:20.269001 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:20 crc kubenswrapper[4869]: E0202 14:33:20.269163 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:20 crc kubenswrapper[4869]: W0202 14:33:20.359411 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:20 crc kubenswrapper[4869]: E0202 14:33:20.359528 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.394200 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.403181 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:26:04.599470715 +0000 UTC Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.467846 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cfa72f79dabb52e799fe1b2af151ed7242d9948dd2aab3f270e38d6c440f1289"} Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.469394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4a7efc6d1da75e5f3dc1a88730740270d7e2dd1b9a43546d381bdba2e4c31f1"} Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.470726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c49a5d8ff5f9465a3ccd1e276faf8569969243664955313c6fc334f39658f242"} Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.471820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6d909dcb766c2755083c94680b14a22170bbc1d9e5bd6f4d537c7d569fb38e38"} Feb 02 14:33:20 crc kubenswrapper[4869]: I0202 14:33:20.473624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"baf38cce62b26cb7328b563ce481efb5935cfdd3a8734fe8ab05d4446d7f36dd"} Feb 02 14:33:20 crc kubenswrapper[4869]: W0202 14:33:20.548561 4869 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:20 crc kubenswrapper[4869]: E0202 14:33:20.548685 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:20 crc kubenswrapper[4869]: E0202 14:33:20.807068 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="1.6s" Feb 02 14:33:21 crc kubenswrapper[4869]: W0202 14:33:21.003960 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:21 crc kubenswrapper[4869]: E0202 14:33:21.004061 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.053146 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.055762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.055812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.055825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.055864 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:21 crc kubenswrapper[4869]: E0202 14:33:21.056432 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.82:6443: connect: connection refused" node="crc" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.326102 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 14:33:21 crc kubenswrapper[4869]: E0202 14:33:21.328833 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.393986 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.403348 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:17:15.641347561 +0000 UTC Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.479057 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d" exitCode=0 Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.479175 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d"} Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.479415 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.480678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.480734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.480747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.482687 4869 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f" exitCode=0 Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.482748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f"} Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.482797 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.484037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.484100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.484117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.486138 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37" exitCode=0 Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.486234 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"} Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.486307 4869 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.487401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.487444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.487457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.488985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53"} Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.491295 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8" exitCode=0 Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.491349 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8"} Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.491433 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.493098 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.493198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.493235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.493248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.494273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.494298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:21 crc kubenswrapper[4869]: I0202 14:33:21.494311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:22 crc kubenswrapper[4869]: W0202 14:33:22.015394 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:22 crc kubenswrapper[4869]: E0202 14:33:22.015478 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" 
logger="UnhandledError" Feb 02 14:33:22 crc kubenswrapper[4869]: W0202 14:33:22.164415 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:22 crc kubenswrapper[4869]: E0202 14:33:22.164545 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.393218 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.403470 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:46:03.528788928 +0000 UTC Feb 02 14:33:22 crc kubenswrapper[4869]: E0202 14:33:22.408809 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="3.2s" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.500154 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a" exitCode=0 Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.500311 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.500298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a"} Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.501369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.501426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.501444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.504972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf"} Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.508680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"} Feb 02 14:33:22 crc 
kubenswrapper[4869]: I0202 14:33:22.510506 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3"} Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.512440 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2"} Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.512565 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.514131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.514186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.514208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.657609 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.659190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.659229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.659241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:22 crc kubenswrapper[4869]: I0202 14:33:22.659270 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:22 crc kubenswrapper[4869]: E0202 14:33:22.660020 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.82:6443: connect: connection refused" node="crc" Feb 02 14:33:22 crc kubenswrapper[4869]: W0202 14:33:22.969167 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:22 crc kubenswrapper[4869]: E0202 14:33:22.969314 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.393866 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.403958 4869 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:39:09.626003992 +0000 UTC Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.519386 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.519448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.519462 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.522837 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.522870 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.522948 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.524075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.524112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.524124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.527028 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c" exitCode=0 Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.527183 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.527640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.528229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.528270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 
14:33:23.528288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.532371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.532395 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.532414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab"} Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.532446 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.534072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.534127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.534142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.534137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.534173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:23 crc kubenswrapper[4869]: I0202 14:33:23.534183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:23 crc kubenswrapper[4869]: W0202 14:33:23.632652 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:23 crc kubenswrapper[4869]: E0202 14:33:23.632754 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.393262 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.404677 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:57:03.784633552 +0000 UTC Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.497894 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.542760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224"} Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.542824 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c"} Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.545890 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.546614 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.546664 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275"} Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.546751 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.546776 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.546974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.547610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:24 crc kubenswrapper[4869]: I0202 14:33:24.603687 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:25 crc kubenswrapper[4869]: W0202 14:33:25.300854 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.82:6443: connect: connection refused Feb 02 14:33:25 crc kubenswrapper[4869]: E0202 14:33:25.301538 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.342636 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 14:33:25 crc kubenswrapper[4869]: E0202 14:33:25.344062 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.405491 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:47:37.876437179 +0000 UTC Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.553803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4"} Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.553856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb"} Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.553878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8"} Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.553988 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.555578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.555622 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.555628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.555660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.557587 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275" exitCode=255 Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.557647 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275"} Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.557728 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.557733 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.558969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.558989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.559013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.559017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.559035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.559046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.559879 4869 scope.go:117] "RemoveContainer" containerID="b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.860656 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.862294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.862365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.862379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:25 crc kubenswrapper[4869]: I0202 14:33:25.862430 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.127713 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.180148 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.191699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.405711 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:43:06.406501532 +0000 UTC Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.563616 4869 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.567255 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e"} Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.567341 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.567388 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.567474 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.567561 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.569000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.568986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:26 crc kubenswrapper[4869]: I0202 14:33:26.569152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.406821 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:46:51.343102044 +0000 UTC Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.568367 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.571116 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.571140 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.571340 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.573434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.573471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.573523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.573541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.573495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.573626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.606114 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.606787 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.608897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.608956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.608966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.720489 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.789132 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.789495 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.791324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.791385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:27 crc kubenswrapper[4869]: I0202 14:33:27.791410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.407603 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:55:45.864052604 +0000 UTC Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.573582 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.573946 4869 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.575093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.575192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.575317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.575621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.575677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:28 crc kubenswrapper[4869]: I0202 14:33:28.575692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.408638 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:03:45.4082437 +0000 UTC
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.507294 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.507591 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.509552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.509614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.509631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:29 crc kubenswrapper[4869]: E0202 14:33:29.548588 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.576889 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.578447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.578495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:29 crc kubenswrapper[4869]: I0202 14:33:29.578506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:30 crc kubenswrapper[4869]: I0202 14:33:30.410562 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:29:59.447042338 +0000 UTC
Feb 02 14:33:30 crc kubenswrapper[4869]: I0202 14:33:30.568607 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 14:33:30 crc kubenswrapper[4869]: I0202 14:33:30.568960 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:33:31 crc kubenswrapper[4869]: I0202 14:33:31.411642 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:02:28.081872455 +0000 UTC
Feb 02 14:33:32 crc kubenswrapper[4869]: I0202 14:33:32.412212 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:59:21.394059252 +0000 UTC
Feb 02 14:33:33 crc kubenswrapper[4869]: I0202 14:33:33.413020 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:00:06.569115311 +0000 UTC
Feb 02 14:33:33 crc kubenswrapper[4869]: I0202 14:33:33.608397 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 14:33:34 crc kubenswrapper[4869]: I0202 14:33:34.413448 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:46:37.439484995 +0000 UTC
Feb 02 14:33:35 crc kubenswrapper[4869]: I0202 14:33:35.394543 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 02 14:33:35 crc kubenswrapper[4869]: I0202 14:33:35.413695 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:22:49.791446616 +0000 UTC
Feb 02 14:33:35 crc kubenswrapper[4869]: E0202 14:33:35.610963 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 02 14:33:35 crc kubenswrapper[4869]: E0202 14:33:35.864149 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Feb 02 14:33:36 crc kubenswrapper[4869]: I0202 14:33:36.136341 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 14:33:36 crc kubenswrapper[4869]: I0202 14:33:36.136550 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 14:33:36 crc kubenswrapper[4869]: I0202 14:33:36.138017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:36 crc kubenswrapper[4869]: I0202 14:33:36.138052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:36 crc kubenswrapper[4869]: I0202 14:33:36.138065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:36 crc kubenswrapper[4869]: I0202 14:33:36.415365 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:53:47.189643508 +0000 UTC
Feb 02 14:33:36 crc kubenswrapper[4869]: E0202 14:33:36.512020 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.1890748c46b51a59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 14:33:19.387122265 +0000 UTC m=+1.031759045,LastTimestamp:2026-02-02 14:33:19.387122265 +0000 UTC m=+1.031759045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.099446 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.099729 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.106042 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.106148 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.416333 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:13:15.089115866 +0000 UTC
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.639185 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.639423 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.641121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.641145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.655855 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.727788 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]log ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]etcd ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/priority-and-fairness-filter ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-apiextensions-informers ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-apiextensions-controllers ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/crd-informer-synced ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-system-namespaces-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 02 14:33:37 crc kubenswrapper[4869]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 02 14:33:37 crc kubenswrapper[4869]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/bootstrap-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/start-kube-aggregator-informers ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 02 14:33:37 crc 
kubenswrapper[4869]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/apiservice-registration-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/apiservice-discovery-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]autoregister-completion ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/apiservice-openapi-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 02 14:33:37 crc kubenswrapper[4869]: livez check failed Feb 02 14:33:37 crc kubenswrapper[4869]: I0202 14:33:37.727867 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:33:38 crc kubenswrapper[4869]: I0202 14:33:38.417584 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:39:05.219018219 +0000 UTC Feb 02 14:33:38 crc kubenswrapper[4869]: I0202 14:33:38.604247 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:38 crc kubenswrapper[4869]: I0202 14:33:38.606654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:38 crc kubenswrapper[4869]: I0202 14:33:38.606709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:38 crc kubenswrapper[4869]: I0202 14:33:38.606723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:39 crc kubenswrapper[4869]: I0202 14:33:39.418238 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:26:35.86959378 +0000 UTC Feb 02 14:33:39 crc kubenswrapper[4869]: E0202 14:33:39.548942 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 14:33:40 crc kubenswrapper[4869]: I0202 14:33:40.419470 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:13:41.715788858 +0000 UTC Feb 02 14:33:40 crc kubenswrapper[4869]: I0202 14:33:40.569391 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 14:33:40 crc kubenswrapper[4869]: I0202 14:33:40.569497 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
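[Annotation: the recurring "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" text in the probe output above is the standard Go net/http client-timeout error, emitted when the probed endpoint does not return response headers before the client's deadline. The self-contained sketch below reproduces the same error shape against a deliberately slow local test server; it does not contact a real cluster.]

    // timeout_demo.go: shows how Go's net/http produces the
    // "Client.Timeout exceeded while awaiting headers" error seen above.
    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "time"
    )

    func main() {
        // A stand-in server that never answers within the client's deadline.
        slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(2 * time.Second)
        }))
        defer slow.Close()

        client := &http.Client{Timeout: 100 * time.Millisecond}
        _, err := client.Get(slow.URL)
        // Prints: Get "http://127.0.0.1:...": context deadline exceeded
        // (Client.Timeout exceeded while awaiting headers)
        fmt.Println(err)
    }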
awaiting headers)" Feb 02 14:33:41 crc kubenswrapper[4869]: I0202 14:33:41.420422 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:53:51.988495462 +0000 UTC Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.085513 4869 trace.go:236] Trace[1717933797]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:27.331) (total time: 14753ms): Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1717933797]: ---"Objects listed" error: 14753ms (14:33:42.085) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1717933797]: [14.753556027s] [14.753556027s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.085550 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086165 4869 trace.go:236] Trace[1641039331]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:31.951) (total time: 10134ms): Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1641039331]: ---"Objects listed" error: 10134ms (14:33:42.086) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1641039331]: [10.134475821s] [10.134475821s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086200 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086589 4869 trace.go:236] Trace[314944033]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:28.267) (total time: 13818ms): Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[314944033]: ---"Objects listed" error: 13818ms (14:33:42.086) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[314944033]: [13.818823153s] [13.818823153s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086610 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.089518 4869 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.090513 4869 trace.go:236] Trace[514746400]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:27.996) (total time: 14094ms): Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[514746400]: ---"Objects listed" error: 14094ms (14:33:42.090) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[514746400]: [14.094105574s] [14.094105574s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.090536 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.099372 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.157959 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.157987 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.158039 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.158132 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.264309 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266243 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.284168 4869 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.285196 4869 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286819 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.299044 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303062 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.314037 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.320538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.320845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.320939 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.321139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.321216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.347671 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352273 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352303 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.368065 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373619 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373765 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.385559 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.385709 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388123 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.399537 4869 apiserver.go:52] "Watching apiserver" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.405209 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.405706 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406221 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406319 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.406410 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406875 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406884 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.406970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.407056 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.407098 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.407147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.408802 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.408965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.410633 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.410833 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.411865 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412232 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412306 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412392 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412430 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.420887 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:39:17.508903278 +0000 UTC Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.478313 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490932 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.494519 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.504692 4869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.512053 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.515444 4869 csr.go:261] certificate signing request csr-mmbhx is approved, waiting to be issued Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.532344 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.536449 4869 csr.go:257] certificate signing request csr-mmbhx is issued Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.550592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.562827 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.574856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.594000 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598223 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 14:33:42 crc 
kubenswrapper[4869]: I0202 14:33:42.598347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598369 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598410 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598445 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598524 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598571 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598679 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598715 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598736 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598757 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598810 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598831 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598849 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598866 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598926 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598948 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598981 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599003 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599025 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599105 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599123 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599163 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599186 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599210 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599293 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599370 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599411 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599433 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.594374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.596945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600868 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599409 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599473 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599679 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599891 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600270 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601034 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601181 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601314 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601328 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601423 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601449 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601471 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601521 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601540 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601558 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601628 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601664 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601697 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601788 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601810 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601928 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601953 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602017 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602054 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602132 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602154 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 14:33:42 crc 
kubenswrapper[4869]: I0202 14:33:42.602202 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602251 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602357 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602380 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602402 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602484 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602509 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602531 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602570 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602588 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602624 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602644 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602665 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602683 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602720 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602745 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602792 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602878 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602897 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602931 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602967 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603050 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603070 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603145 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603152 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603205 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603292 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603312 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603328 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603345 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603362 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603446 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603479 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603496 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603515 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603531 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603549 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603618 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603654 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603689 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 14:33:42 crc 
kubenswrapper[4869]: I0202 14:33:42.603726 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603746 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603763 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603840 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603873 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603890 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603943 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603959 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603977 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603994 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604014 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604033 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604051 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604069 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604102 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604156 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604174 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604210 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604243 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604298 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604547 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604832 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604971 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604984 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604995 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605005 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605014 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605024 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605035 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605045 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605054 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605064 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605074 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605083 4869 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605093 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605102 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605111 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605121 4869 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605130 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605140 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605150 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605159 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605168 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605177 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605188 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605198 4869 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605209 4869 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605220 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605230 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605240 4869 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605250 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605260 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603597 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.630239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631009 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604021 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605293 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.606296 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.606527 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.606763 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607017 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607257 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607542 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607895 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.608050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.608119 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.609160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.609256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.610140 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632361 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632871 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633322 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633869 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633275 4869 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.611427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.611816 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.612103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.612497 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614226 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614456 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.615166 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622469 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623269 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623469 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.624015 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.624032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.624982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.625229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.625954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.626330 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.626697 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.629142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635099 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635417 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635423 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635624 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.636062 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.636487 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.638146 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.131511536 +0000 UTC m=+24.776148306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.640827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.640841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.640855 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641106 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.641120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641133 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641148 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641218 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.141199014 +0000 UTC m=+24.785835784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641279 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641289 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641298 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641326 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.141319847 +0000 UTC m=+24.785956607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.641635 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.641952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642337 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642338 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642723 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.643018 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.643129 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.143089442 +0000 UTC m=+24.787726282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.647027 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.647232 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.648612 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.648823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.648845 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649323 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649398 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649741 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.610450 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650162 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650452 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650646 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650900 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.629141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.643185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.644329 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.651245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.651616 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.651698 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652772 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653538 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.656621 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.156581526 +0000 UTC m=+24.801218296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.643346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.656667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661573 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661839 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.662048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.664478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.664477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.664690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.665401 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.666673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667395 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667410 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667564 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667714 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667994 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.668427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.669117 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.671724 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.672116 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.673664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.673802 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.675054 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.675220 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.675559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.676717 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.679628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.680479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.680951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.681606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.681961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.682356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.683768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.684055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.686995 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" exitCode=255 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.687081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.687159 4869 scope.go:117] "RemoveContainer" containerID="b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.687317 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.694665 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697071 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697152 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697287 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697671 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.704755 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.705622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.708796 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709789 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709806 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709821 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709833 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709844 4869 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709856 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709869 4869 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709882 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709894 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709955 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709973 4869 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709986 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709996 4869 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710007 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710018 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710029 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710040 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710051 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710065 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710076 4869 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710090 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710135 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710150 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710163 4869 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710175 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710186 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710197 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710210 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710221 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710235 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710247 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710259 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710272 4869 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710282 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710293 4869 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710305 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710316 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710369 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710380 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710391 4869 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710403 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710414 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710426 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710440 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710455 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710466 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710478 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710490 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710504 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710515 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710526 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710538 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710548 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710559 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710570 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710580 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710591 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710602 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710612 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710622 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710632 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710644 4869 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710658 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710668 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710681 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710691 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710702 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710713 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710723 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710735 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710747 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710759 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 
14:33:42.710770 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710782 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710798 4869 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710811 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.712458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.712690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715888 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
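[Editor's note] The "Node became not ready" record here is the key state transition in this window: kubelet reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, and the node stays NotReady until the network plugin writes one. A sketch of the kind of check involved; the directory comes from the log record, while the scanning code and extension list (.conf, .conflist, .json, the forms libcni accepts) are illustrative rather than kubelet's actual implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI network config is present,
// mirroring the condition behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/".
func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // a missing directory counts as "no config yet"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	fmt.Printf("NetworkReady=%v for %s\n", hasCNIConfig(dir), dir)
}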
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716608 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716659 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716680 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716705 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716717 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716732 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716746 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716762 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716774 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716788 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716805 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716818 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716833 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716845 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716857 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716869 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716880 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716894 4869 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716922 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716934 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716946 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716958 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716968 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716987 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717000 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717013 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717023 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717035 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717046 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717059 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717071 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717083 4869 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717096 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717109 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717123 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717135 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717149 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717164 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717177 4869 
reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717191 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717204 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717216 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717230 4869 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717244 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717256 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717269 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717281 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717293 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717305 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717317 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717330 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717341 4869 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717353 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717365 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717376 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717388 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717400 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717412 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717424 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717438 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717452 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717464 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717476 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717489 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717501 4869 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717512 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717525 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717537 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717553 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717566 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717579 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717592 4869 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717604 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717615 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717628 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717639 4869 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717649 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717661 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717672 4869 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717686 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717699 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717713 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717726 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717738 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717749 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717760 4869 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.720499 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.725684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.726127 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.727282 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.729541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.733323 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.737787 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.741073 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: W0202 14:33:42.741836 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d WatchSource:0}: Error finding container 5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d: Status 404 returned error can't find the container with id 5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d Feb 02 14:33:42 crc kubenswrapper[4869]: W0202 14:33:42.745185 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389 WatchSource:0}: Error finding container 249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389: Status 404 returned error can't find the container with id 249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.751198 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
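[Editor's note] Every failed status patch in this window has the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 refuses connections, which is consistent with the records above showing the network-node-identity-vrzqb pod itself still waiting on a new sandbox. A quick connectivity probe of the kind you might script while triaging such logs; the address is taken from the log records, and the probe is an illustrative snippet rather than any OpenShift tooling:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from "dial tcp 127.0.0.1:9743: connect: connection refused"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("webhook not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Printf("webhook %s is accepting TCP connections\n", addr)
}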
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: W0202 14:33:42.758124 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11 WatchSource:0}: Error finding container 7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11: Status 404 returned error can't find the container with id 7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.763658 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.781116 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.795573 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826011 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826056 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.827017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.827043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.827064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.835438 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.866324 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.896322 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.896592 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.900136 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.912050 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930840 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.945273 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.963318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.136833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137578 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.229771 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230033 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.229982275 +0000 UTC m=+25.874619055 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230533 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230618 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.230607951 +0000 UTC m=+25.875244721 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230628 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230796 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230837 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230786 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230952 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.230927299 +0000 UTC m=+25.875564079 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230989 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231018 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231033 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.231104604 +0000 UTC m=+25.875741534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231252 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231387 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.23137562 +0000 UTC m=+25.876012550 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.240817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241477 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.421048 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:16:07.179248412 +0000 UTC Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447901 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.462241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.462409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.466824 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.467519 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.469343 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.470180 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.471488 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.472159 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.472956 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.474229 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.475057 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.476238 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.476865 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.478331 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.479038 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.479713 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.480890 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.481692 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.483041 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.483610 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.484399 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.486048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.486644 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.488118 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.488717 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.490448 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.491460 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.492544 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.493394 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.495263 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.496598 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.498123 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.498751 4869 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.498897 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.501782 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.502564 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.503152 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.505388 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.506878 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.507669 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.509235 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.510181 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.511441 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.512298 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.513685 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.515191 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.515879 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.517146 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.517834 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.519586 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.520263 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.520900 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.522099 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.522808 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.524971 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.525617 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.538175 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 14:28:42 +0000 UTC, rotation deadline is 2026-11-16 15:10:47.10648997 +0000 UTC Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.538241 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6888h37m3.568252016s for next certificate rotation Feb 02 14:33:43 crc 
kubenswrapper[4869]: I0202 14:33:43.550606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550691 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653952 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.691660 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.693519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.693556 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.693752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.695007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.695043 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.696450 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.703600 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.706363 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.706662 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.706897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.713480 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.729265 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.743113 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756627 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.759138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.776999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.794537 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.814516 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:24Z\\\",\\\"message\\\":\\\"W0202 14:33:23.822540 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
14:33:23.822872 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770042803 cert, and key in /tmp/serving-cert-4014544013/serving-signer.crt, /tmp/serving-cert-4014544013/serving-signer.key\\\\nI0202 14:33:24.401431 1 observer_polling.go:159] Starting file observer\\\\nW0202 14:33:24.405042 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 14:33:24.405279 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 14:33:24.405989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4014544013/tls.crt::/tmp/serving-cert-4014544013/tls.key\\\\\\\"\\\\nF0202 14:33:24.945153 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] 
\\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.852089 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859195 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.877961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.899589 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.919203 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.943502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.959716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.976489 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.007661 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7tlsl"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.008180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.009660 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dql2j"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.009992 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-d9vfd"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010080 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010177 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-862tl"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010715 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010793 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010995 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.011188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.011391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.011997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.013992 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014004 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014137 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014217 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014405 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014592 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.015316 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.016156 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.032268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-socket-dir-parent\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036276 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdcm\" (UniqueName: 
\"kubernetes.io/projected/a649255d-23ef-4070-9acc-2adb7d94bc21-kube-api-access-5wdcm\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cni-binary-copy\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036384 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-daemon-config\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcz5j\" (UniqueName: \"kubernetes.io/projected/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-kube-api-access-qcz5j\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-netns\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-bin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qr7b\" (UniqueName: \"kubernetes.io/projected/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-kube-api-access-9qr7b\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036574 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036622 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-system-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036708 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-system-cni-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17c822d-8d51-42d0-9cae-7b607f9af79a-hosts-file\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036829 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036897 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cnibin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-multus\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036978 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-os-release\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-etc-kubernetes\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a649255d-23ef-4070-9acc-2adb7d94bc21-rootfs\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-k8s-cni-cncf-io\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 
14:33:44.037055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-kubelet\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-multus-certs\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-conf-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037120 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a649255d-23ef-4070-9acc-2adb7d94bc21-proxy-tls\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a649255d-23ef-4070-9acc-2adb7d94bc21-mcd-auth-proxy-config\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037164 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkw2\" (UniqueName: \"kubernetes.io/projected/c17c822d-8d51-42d0-9cae-7b607f9af79a-kube-api-access-jvkw2\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cnibin\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037200 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-os-release\") pod \"multus-d9vfd\" (UID: 
\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-hostroot\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.052172 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064493 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.071287 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.084703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.099569 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.116900 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.133442 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcz5j\" (UniqueName: \"kubernetes.io/projected/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-kube-api-access-qcz5j\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-netns\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-bin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qr7b\" (UniqueName: \"kubernetes.io/projected/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-kube-api-access-9qr7b\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-system-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-netns\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138366 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-bin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-system-cni-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17c822d-8d51-42d0-9cae-7b607f9af79a-hosts-file\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cnibin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " 
pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-multus\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-os-release\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-etc-kubernetes\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138597 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a649255d-23ef-4070-9acc-2adb7d94bc21-rootfs\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-system-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138623 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-k8s-cni-cncf-io\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-multus\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-etc-kubernetes\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-kubelet\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cnibin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-multus-certs\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-kubelet\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-conf-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a649255d-23ef-4070-9acc-2adb7d94bc21-rootfs\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138777 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138850 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-conf-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-k8s-cni-cncf-io\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a649255d-23ef-4070-9acc-2adb7d94bc21-proxy-tls\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138983 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a649255d-23ef-4070-9acc-2adb7d94bc21-mcd-auth-proxy-config\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17c822d-8d51-42d0-9cae-7b607f9af79a-hosts-file\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139015 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkw2\" (UniqueName: \"kubernetes.io/projected/c17c822d-8d51-42d0-9cae-7b607f9af79a-kube-api-access-jvkw2\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138892 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-os-release\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-system-cni-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cnibin\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139074 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cnibin\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-multus-certs\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139095 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139197 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " 
pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-os-release\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-os-release\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139276 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-hostroot\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-hostroot\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-socket-dir-parent\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdcm\" (UniqueName: \"kubernetes.io/projected/a649255d-23ef-4070-9acc-2adb7d94bc21-kube-api-access-5wdcm\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cni-binary-copy\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139471 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-socket-dir-parent\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139489 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-daemon-config\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a649255d-23ef-4070-9acc-2adb7d94bc21-mcd-auth-proxy-config\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.140144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.140354 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-daemon-config\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.140660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cni-binary-copy\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.144952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a649255d-23ef-4070-9acc-2adb7d94bc21-proxy-tls\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.151535 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.159717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdcm\" (UniqueName: \"kubernetes.io/projected/a649255d-23ef-4070-9acc-2adb7d94bc21-kube-api-access-5wdcm\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.161677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcz5j\" (UniqueName: \"kubernetes.io/projected/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-kube-api-access-qcz5j\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.162514 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkw2\" (UniqueName: \"kubernetes.io/projected/c17c822d-8d51-42d0-9cae-7b607f9af79a-kube-api-access-jvkw2\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.162673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qr7b\" (UniqueName: \"kubernetes.io/projected/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-kube-api-access-9qr7b\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 
crc kubenswrapper[4869]: I0202 14:33:44.168384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.170665 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.186978 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.205783 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.224607 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.239146 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240302 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240424 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240488 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240587 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240534055 +0000 UTC m=+27.885170815 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240700 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240702 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240719 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240736 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240872 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240828102 +0000 UTC m=+27.885465032 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240941 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240898564 +0000 UTC m=+27.885535334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240759 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240967 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240994 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240988776 +0000 UTC m=+27.885625546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240724 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.241032 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.241126 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.241106129 +0000 UTC m=+27.885742899 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.259188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271576 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.275576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: 
I0202 14:33:44.294460 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.310695 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.322599 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.330249 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.330685 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.342834 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: W0202 14:33:44.349315 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17c822d_8d51_42d0_9cae_7b607f9af79a.slice/crio-3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7 WatchSource:0}: Error finding container 3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7: Status 404 returned error can't find the container with id 3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7 Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.349977 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.353711 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: W0202 14:33:44.363404 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda649255d_23ef_4070_9acc_2adb7d94bc21.slice/crio-202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3 WatchSource:0}: Error finding container 202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3: Status 404 returned error can't find 
the container with id 202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3 Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.382044 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.405822 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.406893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.413606 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.413883 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.413888 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417056 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417380 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417590 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417726 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.421498 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:11:58.491157664 +0000 UTC Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.440102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: 
I0202 14:33:44.443790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443855 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443922 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443937 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443958 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443989 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.444007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.463214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.463714 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.463873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.464034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.498278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.522414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546086 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546173 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546242 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546370 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546518 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546542 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546709 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"ovnkube-node-qmsw6\" (UID: 
\"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.550927 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 
14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.550886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551136 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551126 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.581246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.581747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594886 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.607890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.641723 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.677255 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.698389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.698706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.699200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.700031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.700133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.701706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.701825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.703239 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7tlsl" event={"ID":"c17c822d-8d51-42d0-9cae-7b607f9af79a","Type":"ContainerStarted","Data":"3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.704495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"a93e9410ff4a30dfbea3fe2daa15381760bf35e7d117feef1fe49b41f042acf0"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.706126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.706235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"946593d04c6023c1d85ab29e96459a79ec8edef43fccac3ba1e08fbbc2505fc5"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.706954 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.707158 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.711216 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.735061 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.735390 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.759035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.773892 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.789445 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804235 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804423 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.824176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.841159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.857518 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.872870 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.888552 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.903685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907538 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.921221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.936005 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.952480 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.965114 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.989490 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9
lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.004428 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010635 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113867 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.217007 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319839 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.421686 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:23:50.176941636 +0000 UTC Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422904 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.461805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:45 crc kubenswrapper[4869]: E0202 14:33:45.461993 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526148 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628683 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.710004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.712774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.714274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7tlsl" event={"ID":"c17c822d-8d51-42d0-9cae-7b607f9af79a","Type":"ContainerStarted","Data":"bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.720537 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" exitCode=0 Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.720710 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.720747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"ca0e0f37b2bf3d240e5eeec5425678446780834f9687e86b8adc4295de855905"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.724716 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc" exitCode=0 Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.724755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" 
event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731401 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.734407 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\
":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.747990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.763567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.782514 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: 
I0202 14:33:45.795315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.812078 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.830596 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.833989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834099 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.846073 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.865864 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.881816 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.896499 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.911237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.925170 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.943612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.956445 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.984133 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.999432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] 
waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.017457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.035272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039439 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.054226 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.068224 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.082966 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.096632 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.117857 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143458 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247502 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267241 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267425 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.267387641 +0000 UTC m=+31.912024421 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267897 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267932 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267950 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267962 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267971 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.267958765 +0000 UTC m=+31.912595535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268006 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.267992526 +0000 UTC m=+31.912629296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268030 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268042 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268081 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.268070708 +0000 UTC m=+31.912707568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268084 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268116 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268156 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.26814652 +0000 UTC m=+31.912783370 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.300298 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-492m9"] Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.301070 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.303470 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.305727 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.306046 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.310803 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.319074 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.334540 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350996 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.351016 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.354025 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc 
kubenswrapper[4869]: I0202 14:33:46.369469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgx7k\" (UniqueName: \"kubernetes.io/projected/728209c5-b124-458f-b315-306433a62a15-kube-api-access-dgx7k\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.369796 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/728209c5-b124-458f-b315-306433a62a15-serviceca\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.369890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/728209c5-b124-458f-b315-306433a62a15-host\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.373612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.387171 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.401060 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.417849 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.422278 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:20:34.921069818 +0000 UTC Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.432124 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.449396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.462302 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.462416 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.462541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.462781 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.463745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470285 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/728209c5-b124-458f-b315-306433a62a15-host\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgx7k\" (UniqueName: \"kubernetes.io/projected/728209c5-b124-458f-b315-306433a62a15-kube-api-access-dgx7k\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 
02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/728209c5-b124-458f-b315-306433a62a15-serviceca\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/728209c5-b124-458f-b315-306433a62a15-host\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.471302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/728209c5-b124-458f-b315-306433a62a15-serviceca\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.477554 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.492634 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.494827 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgx7k\" (UniqueName: \"kubernetes.io/projected/728209c5-b124-458f-b315-306433a62a15-kube-api-access-dgx7k\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.513096 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659619 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.709926 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: W0202 14:33:46.726840 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728209c5_b124_458f_b315_306433a62a15.slice/crio-a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2 WatchSource:0}: Error finding container a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2: Status 404 returned error can't find the container with id a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2 Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.732997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} 
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.763096 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764939 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.779319 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.794129 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.808136 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.828326 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.829303 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.829535 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.833858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z 
is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.850305 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.867558 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870246 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.880950 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.902579 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.918557 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.937130 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.950578 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.967228 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.972971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973067 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.075744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.075786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.075796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.076019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.076038 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286460 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388851 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.422880 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:52:47.331376196 +0000 UTC Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.462536 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:47 crc kubenswrapper[4869]: E0202 14:33:47.462709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491769 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.573194 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.577956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.584046 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.587321 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus
-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.603250 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.618430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.631548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.656896 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.669567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.682536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.694587 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.708666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.720541 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.735184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.746818 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.758034 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c" exitCode=0 Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.758086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.761732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.761787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.762939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-492m9" event={"ID":"728209c5-b124-458f-b315-306433a62a15","Type":"ContainerStarted","Data":"8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.763009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-492m9" event={"ID":"728209c5-b124-458f-b315-306433a62a15","Type":"ContainerStarted","Data":"a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.765341 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: E0202 14:33:47.769032 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.777303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.789530 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.805171 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.817586 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.830030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.852998 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.875211 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.891140 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.903418 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.918805 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.933895 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.951120 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.964819 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.980900 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009174 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112193 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215414 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421677 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.423739 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:33:14.034961264 +0000 UTC Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.462176 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.462229 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:48 crc kubenswrapper[4869]: E0202 14:33:48.462363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:48 crc kubenswrapper[4869]: E0202 14:33:48.462519 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628621 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731257 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.771212 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f" exitCode=0 Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.772148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.793969 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z 
is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.810005 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.821571 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834160 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.842461 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.859901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.873864 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.890345 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.916059 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.928789 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940583 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.944625 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.958277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.974464 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.991182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.008182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.134703 4869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.251303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.251727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.251882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.252012 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.252106 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354785 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.424698 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:25:20.582440906 +0000 UTC Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.462468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:49 crc kubenswrapper[4869]: E0202 14:33:49.462578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.480544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.493783 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.516078 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.534440 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.547483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560527 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560752 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.566681 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" 
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.582137 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.597365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.612111 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.626778 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.640879 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.655361 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.664007 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.672434 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.685890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767801 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.778026 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590" exitCode=0 Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.778115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.785853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.799823 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.819228 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.832587 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.851776 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b
9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.869246 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871726 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.891767 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.905064 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.920837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.950238 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975328 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.991335 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.016689 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.037277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.056953 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.077809 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078666 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324054 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324227 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324191302 +0000 UTC m=+39.968828082 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324450 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324506 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324495529 +0000 UTC m=+39.969132299 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324515 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324531 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324547 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324561 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324617 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324621 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324670 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324580 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324570711 +0000 UTC m=+39.969207481 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324712 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324703955 +0000 UTC m=+39.969340725 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324739 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324731855 +0000 UTC m=+39.969368635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387456 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.425163 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:15:09.469192847 +0000 UTC Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.462088 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.462107 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.462249 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.462427 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489976 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596375 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.699930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.699987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.700003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.700028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.700042 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.794595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802367 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.823610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e
9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.837747 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.850810 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.865758 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.882318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.896799 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904881 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.915093 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.932217 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.948426 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.966574 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.982291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.998290 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.008003 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.015329 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.028125 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.216548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.216988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.217013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.217037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.217055 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320490 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423500 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.425645 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:06:46.119090824 +0000 UTC Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.462413 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:51 crc kubenswrapper[4869]: E0202 14:33:51.462581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526218 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630333 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.802138 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9" exitCode=0 Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.802226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.809681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.810112 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.823380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.841191 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.844012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.859897 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.876722 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.896644 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.913940 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.925136 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.941611 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.955301 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.980157 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.994938 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.009710 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.020537 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.039239 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042356 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.052237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.064826 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.084835 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.100241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.116078 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.129268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146931 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.154232 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"
finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.170685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.185400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.199332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.213272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.225948 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.242489 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249705 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.256847 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352376 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.426809 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:46:01.126607325 +0000 UTC Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455938 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.461958 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.462049 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.462108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.462235 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.661803 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666532 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.683482 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.708462 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.730854 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.735857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.735975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.735987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.736006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.736020 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.752820 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.753020 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.755954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.755993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.756003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.756024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.756036 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.818897 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01" exitCode=0 Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.818991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.819133 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.819716 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.839731 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.849439 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.860562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.885244 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.899975 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.913390 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.924703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.939423 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3
908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.956929 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962720 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.974787 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.991162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.009719 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.023597 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.037929 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.048580 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.061304 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.065011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.072989 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.087638 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.100104 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.113006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.126147 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.138631 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.151752 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.164421 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177277 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.180411 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.191072 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.204162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.217108 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.238939 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc 
kubenswrapper[4869]: I0202 14:33:53.280695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.384949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.428197 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:14:33.689589829 +0000 UTC
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.462734 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:53 crc kubenswrapper[4869]: E0202 14:33:53.462884 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.596999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.802954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803063 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.827247 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.827802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.841985 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.854455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.867472 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.886698 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc 
kubenswrapper[4869]: I0202 14:33:53.906418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.909510 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.927502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.946151 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.961095 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.976758 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.995312 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.008815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc 
kubenswrapper[4869]: I0202 14:33:54.009545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009558 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.025643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.039929 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.061935 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc 
kubenswrapper[4869]: I0202 14:33:54.112151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112164 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.214942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.214997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.215007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.215030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.215047 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318865 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.429438 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:10:17.56891775 +0000 UTC Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.461990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.462104 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:54 crc kubenswrapper[4869]: E0202 14:33:54.462173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:54 crc kubenswrapper[4869]: E0202 14:33:54.462257 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733557 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835064 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/0.log" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.838319 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8" exitCode=1 Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.838640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.839548 4869 scope.go:117] "RemoveContainer" containerID="6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.853896 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.869894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.885610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.900989 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.925871 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.944666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.963218 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.978501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.992968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.016395 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.033386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.072751 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.094153 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.112294 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143972 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247847 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247983 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351257 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.429764 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:47:34.700647307 +0000 UTC Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454345 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.462647 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:55 crc kubenswrapper[4869]: E0202 14:33:55.462835 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557603 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660503 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763684 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.847752 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/0.log" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.850932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.851031 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866382 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.867507 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.885905 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.899816 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.915941 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.931719 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.947102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.965236 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969712 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.990314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.006588 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.025139 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.037936 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.052926 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.066763 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072139 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.091365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175764 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.430991 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:38:40.912780106 +0000 UTC Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.462674 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.462749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:56 crc kubenswrapper[4869]: E0202 14:33:56.462854 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:56 crc kubenswrapper[4869]: E0202 14:33:56.463018 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486295 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588951 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.691949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.691989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.691999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.692015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.692030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796282 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.859430 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.860199 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/0.log" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.865063 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" exitCode=1 Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.865145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.865278 4869 scope.go:117] "RemoveContainer" containerID="6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.866117 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:33:56 crc kubenswrapper[4869]: E0202 14:33:56.866351 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.888221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.899011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.901181 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.918138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.936948 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.953545 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.970815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.988829 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001826 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.011456 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.025890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.041943 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.056492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.073027 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.088150 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.104961 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105142 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.115257 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed 
*v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.179667 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx"] Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.180269 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.185252 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.185521 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.200952 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208145 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.213750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.227705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.239962 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.254888 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.266899 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.278893 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.296166 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfznq\" (UniqueName: \"kubernetes.io/projected/7087ae0f-5f9b-4da3-8081-6417819b70e8-kube-api-access-lfznq\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304221 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc 
kubenswrapper[4869]: I0202 14:33:57.311681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311725 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.312930 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.330794 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.354651 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added 
*v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.369973 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.393660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.404904 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.405023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.405051 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.405087 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfznq\" (UniqueName: \"kubernetes.io/projected/7087ae0f-5f9b-4da3-8081-6417819b70e8-kube-api-access-lfznq\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.406108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.406791 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.411698 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.413660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.415927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416427 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.425139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfznq\" (UniqueName: \"kubernetes.io/projected/7087ae0f-5f9b-4da3-8081-6417819b70e8-kube-api-access-lfznq\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.431822 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:26:15.67973754 +0000 UTC Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.434311 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.462003 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:57 crc kubenswrapper[4869]: E0202 14:33:57.462578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.501327 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: W0202 14:33:57.519092 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7087ae0f_5f9b_4da3_8081_6417819b70e8.slice/crio-e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e WatchSource:0}: Error finding container e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e: Status 404 returned error can't find the container with id e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519425 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724453 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.872081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" event={"ID":"7087ae0f-5f9b-4da3-8081-6417819b70e8","Type":"ContainerStarted","Data":"1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.872141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" event={"ID":"7087ae0f-5f9b-4da3-8081-6417819b70e8","Type":"ContainerStarted","Data":"41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.872151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" event={"ID":"7087ae0f-5f9b-4da3-8081-6417819b70e8","Type":"ContainerStarted","Data":"e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.875684 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.880398 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:33:57 crc kubenswrapper[4869]: E0202 14:33:57.880715 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.892023 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.910267 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.928192 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931113 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.956559 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c248
8e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.976037 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.991548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.008326 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.022622 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034586 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.038410 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.052420 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.066976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.081517 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 
14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.097225 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.112536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.133588 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added 
*v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138096 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.150858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.165175 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 
2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.181347 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.198300 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.212676 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.228164 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240697 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240806 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.249624 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.261688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.278703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\
\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2
eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14
:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.280370 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qx2qt"] Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.281148 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.281230 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.295061 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.311655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.328812 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344264 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.359784 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.373791 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.390139 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.405249 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416342 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416457 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416518 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416614 4869 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416628 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.416585233 +0000 UTC m=+56.061222003 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416687 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.416664205 +0000 UTC m=+56.061301205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fp98\" (UniqueName: \"kubernetes.io/projected/0b597927-2943-4e1a-bac5-1266d539e8f8-kube-api-access-2fp98\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416755 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416981 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417005 4869 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417051 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417083 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.417068815 +0000 UTC m=+56.061705815 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417103 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.417094655 +0000 UTC m=+56.061731425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417142 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417174 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417195 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417274 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.417254019 +0000 UTC m=+56.061890979 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.420492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.432881 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:15:59.353825783 +0000 UTC Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.435899 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453569 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453604 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.461617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.461776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.461967 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.462109 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.466605 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.485494 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.497868 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.512815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.517685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fp98\" (UniqueName: \"kubernetes.io/projected/0b597927-2943-4e1a-bac5-1266d539e8f8-kube-api-access-2fp98\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.517750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.517930 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.518002 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:59.017979301 +0000 UTC m=+40.662616091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.526620 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.533997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fp98\" (UniqueName: \"kubernetes.io/projected/0b597927-2943-4e1a-bac5-1266d539e8f8-kube-api-access-2fp98\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.538216 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.553389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc 
kubenswrapper[4869]: I0202 14:33:58.556328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.573409 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.587417 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.606562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.619254 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762480 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865165 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967276 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.022118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:59 crc kubenswrapper[4869]: E0202 14:33:59.022305 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:59 crc kubenswrapper[4869]: E0202 14:33:59.022390 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:00.022366288 +0000 UTC m=+41.667003068 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071134 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174140 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.380857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381409 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.433075 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:29:43.865150132 +0000 UTC Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.463181 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:59 crc kubenswrapper[4869]: E0202 14:33:59.463376 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.463446 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.477761 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483970 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.497541 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.514672 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.533315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.549165 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.563610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.578881 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.597620 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.613188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 
14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.630376 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.647447 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.659697 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.678441 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.689004 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.690599 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.706710 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.721830 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792003 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792102 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.889072 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.890868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.891874 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894538 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.906521 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.921633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.932996 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.947383 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 
14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.959760 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.976120 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999176 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.003716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.018867 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.035824 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.035944 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:02.035902224 +0000 UTC m=+43.680538994 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.035661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.037213 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.054184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.072986 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.094073 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.102341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.102821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc 
kubenswrapper[4869]: I0202 14:34:00.102895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.103025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.103097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.111173 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.127425 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.144225 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.158548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206822 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309820 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412879 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.434275 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:37:47.22688946 +0000 UTC Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.462608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.462761 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.463253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.463339 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.463403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.463472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515724 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825285 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031420 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134819 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.237971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238072 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.434902 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:37:52.81091372 +0000 UTC Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444655 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.462605 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:01 crc kubenswrapper[4869]: E0202 14:34:01.462838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548821 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651640 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959728 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.058139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.058339 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.058418 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:06.05839793 +0000 UTC m=+47.703034700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.062933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.062980 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.062992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.063011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.063025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165900 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268875 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.372006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.435677 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:09:11.506385089 +0000 UTC Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.461988 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.462014 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.462038 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.462176 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.462305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.462497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681650 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785138 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.874348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.875254 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.875441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888347 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991451 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146690 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.161296 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.184099 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.207558 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213666 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.228586 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233710 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.251712 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.251863 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.436734 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:15:32.396271304 +0000 UTC Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.462026 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.462153 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079949 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.182939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183094 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286203 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389328 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.437315 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:26:02.077379173 +0000 UTC Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.461724 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.461773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.461802 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:04 crc kubenswrapper[4869]: E0202 14:34:04.461955 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:04 crc kubenswrapper[4869]: E0202 14:34:04.462173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:04 crc kubenswrapper[4869]: E0202 14:34:04.462094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.492549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.492601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.492610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.492629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.492640 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.595049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.595125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.595141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.595162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.595176 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.698781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.698845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.698856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.698876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.698889 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.801438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.801480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.801491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.801513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.801528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.904661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.904699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.904708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.904726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.904735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.008345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.008404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.008416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.008438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.008453 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.111514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.111571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.111588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.111609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.111622 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.214218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.214275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.214287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.214308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.214322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.317300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.317353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.317363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.317384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.317397 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.421856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.421984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.422029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.422053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.422069 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.438460 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:34:54.945524329 +0000 UTC Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.462071 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:05 crc kubenswrapper[4869]: E0202 14:34:05.462260 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.530404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.530458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.530468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.530486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.530498 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.633989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.634071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.634084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.634101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.634112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.737175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.737234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.737246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.737271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.737285 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.839872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.839944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.839955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.839972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.839984 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.943181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.943232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.943244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.943263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.943279 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:05Z","lastTransitionTime":"2026-02-02T14:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.046085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.046135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.046148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.046168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.046180 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.106806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.107108 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.107229 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.107198696 +0000 UTC m=+55.751835466 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.149309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.149357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.149367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.149387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.149402 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.252185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.252230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.252240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.252257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.252272 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.354801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.354843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.354851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.354868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.354887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.438897 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:56:11.227123977 +0000 UTC Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.457394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.457504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.457517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.457541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.457556 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.462715 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.462805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.462702 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.462867 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.463020 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.463126 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.560901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.560996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.561027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.561048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.561064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.669141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.669699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.669711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.669731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.669741 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.774316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.774399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.774413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.774442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.774481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.877595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.877648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.877662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.877681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.877716 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.980565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.980630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.980643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.980668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.980684 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:06Z","lastTransitionTime":"2026-02-02T14:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.083435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.083481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.083495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.083514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.083531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.186177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.186213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.186223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.186239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.186251 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.291866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.291948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.291966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.291993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.292008 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.395429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.395493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.395510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.395539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.395554 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.439374 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:44:27.096066338 +0000 UTC Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.462070 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:07 crc kubenswrapper[4869]: E0202 14:34:07.462275 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.601977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602093 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.704853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.704897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.704934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.704951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.704962 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.794563 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.807661 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.810187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.810227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.810236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.810253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.810273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.816944 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.843467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.857829 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.872176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.886013 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.903505 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.922602 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc 
kubenswrapper[4869]: I0202 14:34:07.926772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926786 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.938243 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.952592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.966780 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.987102 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.001082 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.018652 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029618 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.036791 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.050380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.069363 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc 
kubenswrapper[4869]: I0202 14:34:08.132074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132099 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235343 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338739 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.439509 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:58:50.070035451 +0000 UTC Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.463743 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.463814 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:08 crc kubenswrapper[4869]: E0202 14:34:08.463889 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:08 crc kubenswrapper[4869]: E0202 14:34:08.463986 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.464544 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:08 crc kubenswrapper[4869]: E0202 14:34:08.464768 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.544939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545674 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648814 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.854978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855317 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067867 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171973 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.440179 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:15:26.550951485 +0000 UTC Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.461787 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:09 crc kubenswrapper[4869]: E0202 14:34:09.461980 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.479957 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.480873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.480988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.481015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.481045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.481062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.495506 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.516956 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.534692 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.555211 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.573850 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583835 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.587740 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.600805 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.617274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.631487 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.645998 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.662115 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.675460 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689132 4869 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689193 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.691131 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.706655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.720019 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.740086 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792678 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895925 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.101961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.204941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.204990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.204999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.205011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.205021 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.441162 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:44:30.181656156 +0000 UTC Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.461676 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.461778 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:10 crc kubenswrapper[4869]: E0202 14:34:10.461831 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:10 crc kubenswrapper[4869]: E0202 14:34:10.462000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.461792 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:10 crc kubenswrapper[4869]: E0202 14:34:10.462123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627800 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.911892 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.932972 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937673 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.947575 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.961308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.977643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.990964 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.003813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.020721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040667 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.042308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
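
The "Node became not ready" condition recorded just above is the kubelet relaying the runtime's NetworkReady=false: CRI-O finds no CNI configuration in /etc/kubernetes/cni/net.d/ because ovnkube-controller, which writes that file, is itself crash-looping (see the CrashLoopBackOff record that follows). A rough sketch of the directory probe behind the message, assuming the path quoted in the log and the standard .conf/.conflist/.json config extensions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether any CNI network config is present,
    // approximating the check behind "no CNI configuration file in ...".
    func hasCNIConfig(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        if !hasCNIConfig(dir) {
            fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
        }
    }
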
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
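
Here ovnkube-controller has restartCount 1 and sits in CrashLoopBackOff with "back-off 10s". That 10s is the kubelet's initial crash-loop delay; on each further failed restart the delay doubles until it hits the default five-minute cap, so if the underlying crash (the fatal F0202 exit above) persists, later records will show 20s, 40s, and so on. A small sketch of that schedule, treating 10s/5m as the usual kubelet defaults rather than anything read from this node's configuration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet-style crash-loop back-off: start at 10s, double after
        // each failed restart, cap at 5m (the defaults; configurable).
        backoff, max := 10*time.Second, 5*time.Minute
        for i := 1; i <= 7; i++ {
            fmt.Printf("restart %d: back-off %s\n", i, backoff)
            backoff *= 2
            if backoff > max {
                backoff = max
            }
        }
    }
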
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.054468 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.069882 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.089746 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.101425 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
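
The patch bodies in these records are hard to read because the JSON is quoted inside a Go-formatted log string (each " in the patch surfaces as \\\" by the time journald renders it). Peeling one quoting layer with strconv.Unquote and pretty-printing with json.Indent makes them legible; the fragment below is a shortened stand-in for one of the patches above, not the full payload:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "strconv"
    )

    func main() {
        // Shortened stand-in for an escaped patch as it appears in the log,
        // wrapped in quotes so strconv.Unquote can decode it.
        escaped := `"{\"metadata\":{\"uid\":\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\"},\"status\":{\"phase\":\"Running\"}}"`
        raw, err := strconv.Unquote(escaped)
        if err != nil {
            log.Fatal(err)
        }
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
            log.Fatal(err)
        }
        fmt.Println(pretty.String())
    }
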
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.121856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.142589 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
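
With a dozen pods failing in the same way, a quick triage step is to confirm that every failure shares the single root cause rather than several. A small sketch that tallies "Failed to update status for pod" records per pod from a saved excerpt (journal.log is a hypothetical filename for this dump):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "regexp"
    )

    func main() {
        // Hypothetical file holding this journal excerpt.
        f, err := os.Open("journal.log")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        re := regexp.MustCompile(`Failed to update status for pod" pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these records far exceed the 64KB default
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        for pod, n := range counts {
            fmt.Printf("%3d  %s\n", n, pod)
        }
    }
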
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.142870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.142989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.143008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.143035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.143052 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.159482 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.173079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
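Annotation: both replaced containers in the patches above report lastState.terminated.exitCode 137 with reason ContainerStatusUnknown. Exit codes above 128 encode death by signal (128 + signal number), so 137 is SIGKILL, which is consistent with "The container could not be located when the pod was deleted". A one-line decode, as a sketch:

import signal

code = 137  # lastState.terminated.exitCode from the patches above
if code > 128:
    # 137 = 128 + 9 -> SIGKILL
    print(f"exit {code} = 128 + {code - 128} -> {signal.Signals(code - 128).name}")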
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.188422 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246292 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349324 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
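Annotation: every patch failure in this stretch has the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A sketch that pulls the certificate off that port and prints its validity window; it assumes it is run on the node itself (the endpoint is loopback), that the third-party cryptography package is installed, and it disables verification on purpose so an expired certificate can still be read:

import socket, ssl
from cryptography import x509  # assumed available; not part of the stdlib

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log entries above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("subject:  ", cert.subject.rfc4514_string())
# not_valid_after on older cryptography releases; expect 2025-08-24 17:21:41+00:00
print("not after:", cert.not_valid_after_utc)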
Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.441379 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:45:37.551097107 +0000 UTC Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451979 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.462224 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:11 crc kubenswrapper[4869]: E0202 14:34:11.462419 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760246 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966219 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.068936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.068998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.069016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.069040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.069059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171967 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.378976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379115 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.441892 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:58:43.759919653 +0000 UTC Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.462291 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.462337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.462367 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:12 crc kubenswrapper[4869]: E0202 14:34:12.462432 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:12 crc kubenswrapper[4869]: E0202 14:34:12.462546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:12 crc kubenswrapper[4869]: E0202 14:34:12.462660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482841 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585297 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791974 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.102024 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205188 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.321587 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.348953 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354658 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.371611 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376447 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.397686 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.427349 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.427626 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.443014 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:03:26.601457629 +0000 UTC Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.462612 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.462825 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637588 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739757 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843255 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946247 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049393 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.142020 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.142172 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.142231 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:30.142218642 +0000 UTC m=+71.786855412 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152642 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359586 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.443646 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:52:13.157379583 +0000 UTC Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.445677 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.445641658 +0000 UTC m=+88.090278478 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.445787 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.445874 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.445845254 +0000 UTC m=+88.090482024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446037 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446072 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.446064389 +0000 UTC m=+88.090701159 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446119 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446161 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446188 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446200 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446266 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446287 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.446263214 +0000 UTC m=+88.090900024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446295 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446409 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.446374357 +0000 UTC m=+88.091011287 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.461593 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.461740 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.462180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.462271 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.462378 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.462466 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978945 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.081962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.081998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.082006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.082020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.082030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184809 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286953 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390593 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.443922 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:41:08.572963372 +0000 UTC Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.462643 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:15 crc kubenswrapper[4869]: E0202 14:34:15.462834 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492804 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698510 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904209 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.006716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007188 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.419983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420115 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.444494 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:45:10.918340499 +0000 UTC Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.461973 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.462016 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.461973 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:16 crc kubenswrapper[4869]: E0202 14:34:16.462187 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:16 crc kubenswrapper[4869]: E0202 14:34:16.462108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:16 crc kubenswrapper[4869]: E0202 14:34:16.462292 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522854 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625716 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727855 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830490 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.932955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.932986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.932995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.933008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.933016 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036534 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.242869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.242974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.243000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.243042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.243061 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353835 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.445363 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:19:32.909034923 +0000 UTC Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457604 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.462073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:17 crc kubenswrapper[4869]: E0202 14:34:17.462237 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.463483 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.560970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663384 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766591 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.872941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.872987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.872999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.873016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.873029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.968114 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.971958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.972544 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976199 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.993633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:17Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.018555 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.031674 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.044759 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.056623 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.067738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.087659 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893
c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.100966 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.112343 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.128533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.143070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.156312 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.172666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180847 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180869 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.185365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.198430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.213145 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.226006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.386984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.446329 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:01:50.3857065 +0000 UTC Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.461755 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.461930 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.461984 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.462168 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.462397 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.462596 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489353 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.591273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.591726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.591969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.592127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.592267 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695180 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797971 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900309 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.978525 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.979091 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.980964 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" exitCode=1 Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.981001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.981033 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.981786 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.981940 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.997189 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008788 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.011021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.027834 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.063506 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.073943 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.087207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.101168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111629 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.124082 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c248
8e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.136857 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f60
95941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.150297 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.164757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.179711 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.193424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.207550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215880 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.219068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.230102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421813 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.446719 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:29:05.933843128 +0000 UTC Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.461706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:19 crc kubenswrapper[4869]: E0202 14:34:19.462609 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.485674 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.500054 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.517743 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc 
kubenswrapper[4869]: I0202 14:34:19.525147 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.533868 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.547882 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.559899 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.571901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.586021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.598572 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.613494 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639610 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639998 4869 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.653556 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.673279 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.685669 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.700091 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.715060 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.728457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745680 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745727 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848695 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951751 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.986968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.990785 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:19 crc kubenswrapper[4869]: E0202 14:34:19.991043 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.002392 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.017274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.028322 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.046480 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.057353 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.071168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.085463 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.101550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.121030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.136405 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.151986 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158326 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.170188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.187035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.201000 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.213341 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.224232 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.237614 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261828 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.446998 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:55:52.423337191 +0000 UTC
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.462433 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.462508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.462487 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:20 crc kubenswrapper[4869]: E0202 14:34:20.462663 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
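Every "Failed to update status for pod" record above fails for the same reason: the network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A quick way to confirm what that endpoint is serving is to pull the certificate without verification and print its validity window; the sketch below assumes Python with the third-party cryptography package available on the node, which the log itself does not show:

    # Sketch: fetch the leaf certificate presented by the webhook endpoint
    # named in the log and print its validity window. With ca_certs left
    # unset, ssl.get_server_certificate() skips verification, so an
    # expired certificate is still returned.
    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # the log implies 2025-08-24 17:21:41 UTC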
Feb 02 14:34:20 crc kubenswrapper[4869]: E0202 14:34:20.462806 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:20 crc kubenswrapper[4869]: E0202 14:34:20.462885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466875 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569834 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
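The NotReady loop itself is independent of the webhook failure: the kubelet keeps reporting NetworkReady=false because it finds no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness check follows; the directory path is quoted verbatim from the messages, while the accepted file extensions are an assumption matching common CNI config loaders, not something the log states:

    # Sketch: report whether the CNI config directory named in the log
    # contains at least one network configuration file.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"  # path quoted from the log

    def cni_configured(path: str = CNI_CONF_DIR) -> bool:
        if not os.path.isdir(path):
            return False
        # Extensions are an assumption (typical CNI loaders), not from the log.
        return any(name.endswith((".conf", ".conflist", ".json"))
                   for name in os.listdir(path))

    print("CNI configured:", cni_configured())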
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879370 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
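The certificate_manager.go:356 lines that recur through this log (14:34:20.446998 above, and again at 14:34:21 and 14:34:22 below) always report the same kubelet-serving expiry, 2026-02-24 05:53:03 UTC, but a different rotation deadline each time: the deadline is re-drawn as a jittered point late in the certificate's validity window on each evaluation. A sketch of that behavior, where the issue date and the 70-100% jitter band are illustrative assumptions rather than client-go's exact constants:

    # Sketch: re-drawing a jittered rotation deadline inside the validity
    # window is why each log line prints a different deadline for the
    # same certificate.
    import random
    from datetime import datetime, timedelta

    not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiry from the log
    not_before = not_after - timedelta(days=365)   # assumed issue date
    lifetime = not_after - not_before

    for _ in range(3):
        deadline = not_before + lifetime * (0.7 + 0.3 * random.random())
        print("rotation deadline:", deadline)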
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.090566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091356 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195304 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298721 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.447440 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:07:09.229156177 +0000 UTC Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.462145 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:21 crc kubenswrapper[4869]: E0202 14:34:21.462419 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.506451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.506844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.506954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.507047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.507136 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609977 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.713006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816844 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920245 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023379 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247434 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350349 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.448218 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:08:36.385090318 +0000 UTC Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453288 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.461617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.461682 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:22 crc kubenswrapper[4869]: E0202 14:34:22.461765 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:22 crc kubenswrapper[4869]: E0202 14:34:22.461895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.461629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:22 crc kubenswrapper[4869]: E0202 14:34:22.462001 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557215 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661293 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867893 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970561 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073700 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178670 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383756 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.448987 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:41:06.469432004 +0000 UTC Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.462652 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.462816 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.494691 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500959 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.515612 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520788 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.533792 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.550166 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554380 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.566977 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.567162 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570637 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673526 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.776867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.776969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.776989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.777023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.777048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880194 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085537 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393521 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.449320 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:14:29.205079694 +0000 UTC Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.461940 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.462020 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.461943 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:24 crc kubenswrapper[4869]: E0202 14:34:24.462072 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:24 crc kubenswrapper[4869]: E0202 14:34:24.462192 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:24 crc kubenswrapper[4869]: E0202 14:34:24.462276 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495822 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.598953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599065 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702617 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805462 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.010971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011101 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.216753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.319934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.319985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.319998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.320014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.320025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422622 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.450345 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:49:12.042854659 +0000 UTC Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.461898 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:25 crc kubenswrapper[4869]: E0202 14:34:25.462225 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628327 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730843 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833714 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.936945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937081 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243691 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346178 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449174 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.451423 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:17:27.867635656 +0000 UTC Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.461740 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.461754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:26 crc kubenswrapper[4869]: E0202 14:34:26.461944 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.461754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:26 crc kubenswrapper[4869]: E0202 14:34:26.462052 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:26 crc kubenswrapper[4869]: E0202 14:34:26.462091 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654225 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858632 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.960864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.960973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.960991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.961014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.961031 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165771 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267853 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370381 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.452186 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:30:47.344048317 +0000 UTC Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.462591 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:27 crc kubenswrapper[4869]: E0202 14:34:27.462756 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472997 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575766 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.687006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892686 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098717 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304451 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406777 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.453203 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:17:21.437611492 +0000 UTC Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.462696 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.462776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:28 crc kubenswrapper[4869]: E0202 14:34:28.462856 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.462779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:28 crc kubenswrapper[4869]: E0202 14:34:28.462943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:28 crc kubenswrapper[4869]: E0202 14:34:28.463094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509539 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615499 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821255 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026945 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131166 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233846 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.453823 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:41:27.259380519 +0000 UTC Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.462767 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:29 crc kubenswrapper[4869]: E0202 14:34:29.462954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.478993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.490648 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.502793 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.521581 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.532549 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544186 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.546067 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.562399 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.576473 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.595455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.614441 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.634268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646950 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.649053 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.663556 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.677393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.690428 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.699821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.712303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749560 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852086 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955721 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060833 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165634 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.199683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.199877 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.199990 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:02.19997051 +0000 UTC m=+103.844607280 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372845 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.454862 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:56:05.397620587 +0000 UTC Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.462231 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.462381 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.462493 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.462504 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.462625 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.462686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578071 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.680974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681056 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990568 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402184 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.455990 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:25:58.270244649 +0000 UTC Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.462458 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:31 crc kubenswrapper[4869]: E0202 14:34:31.462629 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.504875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505105 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608770 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917533 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020964 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.033260 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/0.log" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.033310 4869 generic.go:334] "Generic (PLEG): container finished" podID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" containerID="b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9" exitCode=1 Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.033344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerDied","Data":"b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.034196 4869 scope.go:117] "RemoveContainer" containerID="b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.047878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.061085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.073903 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.089818 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.103544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.121276 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123773 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.134737 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.146664 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.157342 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.170659 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.182841 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.196424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.209526 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.222270 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226230 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.237458 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.259942 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893
c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.274995 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329189 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431796 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.456221 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:53:43.047334506 +0000 UTC Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.462667 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.462684 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.462710 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:32 crc kubenswrapper[4869]: E0202 14:34:32.462895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:32 crc kubenswrapper[4869]: E0202 14:34:32.462989 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:32 crc kubenswrapper[4869]: E0202 14:34:32.463051 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535527 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746597 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.040196 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/0.log" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.040352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.057340 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.073921 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.090565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.109308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.125014 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.139619 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.154261 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162280 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.170324 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.185832 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.200070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.223828 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.239661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.254183 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265576 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.269343 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.282854 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.299221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.316274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.372781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.457044 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:32:29.628145708 +0000 UTC Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.462452 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.462620 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.463336 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.463534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.476770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477521 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580310 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616462 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.634465 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.658352 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663634 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.682000 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686357 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.702326 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.721049 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.721752 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828331 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932232 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.034946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.034995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.035013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.035034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.035047 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138200 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240629 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350417 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.458232 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:09:51.173278978 +0000 UTC
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.462669 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.462669 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.462818 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
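
The "No sandbox for pod can be found" entries mark the restarted kubelet rebuilding pod sandboxes, and sandbox creation is precisely the step that needs a CNI plugin; with /etc/kubernetes/cni/net.d/ still empty, these pods drop into the "Error syncing pod, skipping" loop recorded below. Two quick node-side checks, sketched assuming shell access and the crictl CLI that ships alongside CRI-O:

  # the directory the kubelet is complaining about; it stays empty until a network provider writes a config
  ls -l /etc/kubernetes/cni/net.d/
  # list pod sandboxes the runtime knows about (the name filter here is just an example substring)
  sudo crictl pods --name network-check
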
Feb 02 14:34:34 crc kubenswrapper[4869]: E0202 14:34:34.462998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:34 crc kubenswrapper[4869]: E0202 14:34:34.463127 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:34 crc kubenswrapper[4869]: E0202 14:34:34.463254 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556940 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660592 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.762976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176529 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.458805 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:37:37.528422065 +0000 UTC
Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.464198 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:35 crc kubenswrapper[4869]: E0202 14:34:35.464317 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
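
The certificate_manager lines record a second clock symptom: the kubelet-serving certificate is valid until 2026-02-24, yet the rotation deadline reported at 14:34:34 (2025-11-09) and recomputed at 14:34:35 (2025-11-25) already lies in the past for a clock reading 2026-02-02, consistent with a VM resumed long after its certificates were issued, the same skew that invalidated the webhook certificate above. The serving certificate can also be read directly off disk; a sketch assuming the default kubelet PKI layout:

  # current kubelet serving certificate, a symlink kept up to date by the rotation manager
  sudo openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-server-current.pem
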
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524192 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.626899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.626970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.626984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.627000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.627011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730101 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832239 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935218 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038125 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.348976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.451992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.459801 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:30:25.541871684 +0000 UTC Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.462201 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.462302 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:36 crc kubenswrapper[4869]: E0202 14:34:36.462492 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:36 crc kubenswrapper[4869]: E0202 14:34:36.462319 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.462533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:36 crc kubenswrapper[4869]: E0202 14:34:36.462662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657525 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761743 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865136 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967776 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174797 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277728 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380878 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.460749 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:31:10.677485926 +0000 UTC Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.462118 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:37 crc kubenswrapper[4869]: E0202 14:34:37.462277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483832 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587265 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792787 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896141 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.999808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:37.999954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:37.999975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.000005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.000026 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103399 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206442 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.460948 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:51:08.2786379 +0000 UTC Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.462264 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.462343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:38 crc kubenswrapper[4869]: E0202 14:34:38.462462 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.462482 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:38 crc kubenswrapper[4869]: E0202 14:34:38.462635 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:38 crc kubenswrapper[4869]: E0202 14:34:38.462770 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619466 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927646 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.445218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.445983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.446093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.446189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.446281 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.461749 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:20:51.018266935 +0000 UTC Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.463330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:39 crc kubenswrapper[4869]: E0202 14:34:39.463630 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.481404 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.484393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.504350 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.520146 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.535152 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.549193 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.564201 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.577621 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.604524 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.619648 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.635190 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.650484 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651486 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.661825 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" 
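Every status-manager failure in the records above shares one root cause: the serving certificate behind the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal diagnostic sketch to confirm the expiry window from the node itself, assuming shell access to the node, that the endpoint completes a TLS handshake without a client certificate, and that the third-party cryptography package is available (none of which the log itself confirms):

    # Sketch: fetch the webhook's serving certificate and report its notAfter.
    # The address 127.0.0.1:9743 and the 2025-08-24T17:21:41Z expiry both come
    # from the "failed calling webhook" errors above; everything else is assumed.
    import ssl
    from datetime import datetime, timezone
    from cryptography import x509  # third-party package; assumed installed

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))  # no cert verification
    cert = x509.load_pem_x509_certificate(pem.encode())
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)  # naive UTC in older versions
    now = datetime.now(timezone.utc)
    print(f"notAfter={not_after:%Y-%m-%dT%H:%M:%SZ} expired={now > not_after}")
    # Against this log the expected output would be:
    # notAfter=2025-08-24T17:21:41Z expired=True

The sketch only confirms which side of the validity window the node sits on; it does not renew the certificate, so the NodeNotReady and CrashLoopBackOff records that follow would persist until the certificate is rotated or the clock skew is resolved.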
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.677935 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.690882 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.704475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.718435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.731476 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754278 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856665 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166392 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269346 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372183 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462266 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:32:48.419955988 +0000 UTC Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462417 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462493 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462427 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:40 crc kubenswrapper[4869]: E0202 14:34:40.462551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:40 crc kubenswrapper[4869]: E0202 14:34:40.462640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:40 crc kubenswrapper[4869]: E0202 14:34:40.462745 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578371 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.398986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.462223 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:41 crc kubenswrapper[4869]: E0202 14:34:41.462442 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.462521 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:39:14.026302732 +0000 UTC Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605647 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917155 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020689 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123301 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225729 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328234 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431102 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.461773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.461811 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:42 crc kubenswrapper[4869]: E0202 14:34:42.461893 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.461773 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:42 crc kubenswrapper[4869]: E0202 14:34:42.462023 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:42 crc kubenswrapper[4869]: E0202 14:34:42.462534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.462604 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:52:45.27814978 +0000 UTC Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534234 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948395 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.050949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051033 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153615 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256832 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359988 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462037 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:43 crc kubenswrapper[4869]: E0202 14:34:43.462177 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462516 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462688 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:37:43.219518513 +0000 UTC Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.564860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.564994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.565014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.565039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.565057 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772364 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875847 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
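The certificate_manager.go entry above reports the kubelet-serving certificate's expiration together with an earlier rotation deadline. A small stdlib sketch (timestamps copied from that entry, fractional seconds dropped) shows the margin the kubelet leaves itself for rotation:

from datetime import datetime, timezone

# Timestamps from the certificate_manager.go entry above (sub-second precision dropped).
expiration = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
rotation_deadline = datetime(2025, 11, 30, 8, 37, 43, tzinfo=timezone.utc)

# Rotation is scheduled well before expiry; here the margin is just under 86 days.
print("rotation margin:", expiration - rotation_deadline)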
[The node-status cycle repeats at 14:34:43.965033.]
Feb 02 14:34:43 crc kubenswrapper[4869]: E0202 14:34:43.980151 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:43Z is after 2025-08-24T17:21:41Z"
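The patch attempt above fails inside the node.network-node-identity admission webhook: its serving certificate at 127.0.0.1:9743 expired on 2025-08-24, while the node clock reads 2026-02-02. A hypothetical stdlib-only sketch that parses the x509 error text out of such an entry and reports how stale the certificate is (datetime.fromisoformat accepts the trailing Z on Python 3.11+):

import re
from datetime import datetime

# The x509 failure text, copied from the entry above.
err_tail = ('tls: failed to verify certificate: x509: certificate has expired '
            'or is not yet valid: current time 2026-02-02T14:34:43Z is after '
            '2025-08-24T17:21:41Z')

# Pull both RFC 3339 timestamps out of the error message.
m = re.search(r'current time (\S+) is after (\S+)', err_tail)
if m:
    now, not_after = (datetime.fromisoformat(t) for t in m.groups())
    print(f"webhook certificate expired {now - not_after} ago")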
event="NodeHasNoDiskPressure" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984594 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:43 crc kubenswrapper[4869]: E0202 14:34:43.998968 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
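Entries like the one above recur throughout this capture. When triaging, it can help to tally klog records by severity and source location; a brief sketch, assuming the journal output has been saved to a plain-text file (the name kubelet.log is a placeholder):

import re
from collections import Counter

counts = Counter()
# "kubelet.log" is a placeholder for a saved copy of this journal output.
with open("kubelet.log", encoding="utf-8") as f:
    for line in f:
        # klog header: severity letter, MMDD hh:mm:ss.micros, pid, file:line].
        m = re.search(r'\b([IWE])(\d{4} \d\d:\d\d:\d\d\.\d+) +(\d+) +(\S+?:\d+)\]', line)
        if m:
            severity, _, _, source = m.groups()
            counts[(severity, source)] += 1

# The noisiest sources float to the top, e.g. kubelet_node_status.go:724.
for (severity, source), n in counts.most_common(5):
    print(f"{n:6d}  {severity}  {source}")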
event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.016649 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.039011 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044242 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.059876 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.060016 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061731 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165127 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267475 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370707 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.462366 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.462546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.462774 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:29:00.012370873 +0000 UTC Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.462864 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.462970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.463143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.463224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473551 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576421 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782973 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.989891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990149 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.092902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.092970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.092986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.093006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.093019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298615 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400790 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.462510 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.463305 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:24:26.04400733 +0000 UTC Feb 02 14:34:45 crc kubenswrapper[4869]: E0202 14:34:45.464050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.466121 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.488853 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.503992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504445 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712634 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917894 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020563 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226184 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.329001 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.417349 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.420496 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.421448 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.446253 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461613 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461695 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.461872 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.462043 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.462104 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.463656 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:07:43.563524662 +0000 UTC Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470180 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.470142383 +0000 UTC m=+152.114779153 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470402 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470425 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470455 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470520 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.470494952 +0000 UTC m=+152.115131722 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470657 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470730 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.470705138 +0000 UTC m=+152.115342098 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470753 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470797 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.47078451 +0000 UTC m=+152.115421510 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471196 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471250 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471264 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471352 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.471329083 +0000 UTC m=+152.115966033 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.476288 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.493655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65804f76-1783-4c7e-b1b2-c8b08c84615f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.510522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.524828 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535484 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535595 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.540252 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.553843 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.566997 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.587028 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.613522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.630450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.646687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.664892 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.677968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.697533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ae4835-4a7a-4f35-9a26-1b652269688f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.710895 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.730342 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742857 4869 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.745714 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846440 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846489 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.950056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.950110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.950123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.950141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.950154 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.053590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.053651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.053663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.053679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.053691 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.156479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.156531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.156544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.156559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.156571 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.259451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.259501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.259512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.259529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.259540 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.362471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.362539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.362550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.362569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.362581 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
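Has your network provider started?"}

The heartbeat above repeats roughly every 100 ms because the runtime keeps reporting NetworkReady=false: the kubelet finds no CNI network config under /etc/kubernetes/cni/net.d/. As a reading aid, here is a minimal Go sketch of that directory probe; the path comes from the log message itself, while the extension filter mirrors what CNI's libcni accepts and is an assumption here:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet's NotReady condition above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            // Extension filter is an assumption, mirroring what libcni loads.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, filepath.Join(dir, e.Name()))
            }
        }
        if len(confs) == 0 {
            // Matches the condition the kubelet keeps logging.
            fmt.Println("no CNI configuration file in", dir)
            return
        }
        for _, c := range confs {
            fmt.Println("found:", c)
        }
    }

On this node the listing would stay empty until the network provider writes its config, which is consistent with the kube-multus container log earlier in this section timing out while polling for its readiness indicator file 10-ovn-kubernetes.conf.
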
Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.462033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:47 crc kubenswrapper[4869]: E0202 14:34:47.462325 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.463831 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:13:45.076373692 +0000 UTC Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.465261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.465344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.465362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.465391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.465411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.568978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.569281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.569309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.569338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.569352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.672363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.672416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.672427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.672449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.672465 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.775038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.775069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.775077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.775089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.775097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.878064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.878108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.878119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.878136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.878150 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.982257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.982312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.982329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.982350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.982365 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.085473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.085521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.085534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.085554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.085566 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.188495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.188594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.188607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.188626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.188639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.291310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.291347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.291355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.291369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.291379 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.394054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.394089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.394098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.394111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.394120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.462541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:48 crc kubenswrapper[4869]: E0202 14:34:48.462685 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.462754 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:48 crc kubenswrapper[4869]: E0202 14:34:48.462811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.462858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:48 crc kubenswrapper[4869]: E0202 14:34:48.462964 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.464226 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:45:35.836618439 +0000 UTC Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.496668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.496716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.496728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.496746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.496762 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
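Has your network provider started?"}

Every "Failed to update status" entry in this section fails the same way: the node's clock (2026-02-02T14:34:46Z) is past the webhook serving certificate's expiry (2025-08-24T17:21:41Z), so the TLS handshake with https://127.0.0.1:9743 is rejected before any patch is sent. The webhook container shown earlier mounts its certificate at /etc/webhook-cert/; assuming the conventional tls.crt file name (not stated in the log), a minimal sketch of the validity-window check the verifier applies:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Mount path from the webhook container's volumeMounts above;
        // the tls.crt file name is an assumption.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            fmt.Println("read cert:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block in file")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse cert:", err)
            return
        }
        now := time.Now()
        fmt.Printf("notBefore=%s notAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
        // The same window comparison that produces the error in this log.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Println("x509: certificate has expired or is not yet valid")
        }
    }

The kubelet-serving lines nearby show the healthy counterpart: that certificate runs to 2026-02-24, and the certificate manager has already picked a rotation deadline comfortably before expiry.
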
Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.600059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.600146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.600169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.600198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.600220 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.703428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.703486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.703501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.703523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.703538 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.807389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.807457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.807476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.807504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.807522 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.910905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.910978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.910988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.911006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.911020 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:48Z","lastTransitionTime":"2026-02-02T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.014367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.014437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.014457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.014489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.014517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.117195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.117235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.117246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.117259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.117270 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.221509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.221554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.221565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.221582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.221594 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.324127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.324176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.324188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.324206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.324217 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.462316 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:49 crc kubenswrapper[4869]: E0202 14:34:49.462458 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.464620 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:07:26.817158798 +0000 UTC Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.480616 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.498184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.512095 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.524500 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532618 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.546604 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ae4835-4a7a-4f35-9a26-1b652269688f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.556403 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65804f76-1783-4c7e-b1b2-c8b08c84615f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.571266 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.587331 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.609937 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.622576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.634898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635149 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635250 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.648590 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.664807 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.682642 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.696771 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.714414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.728636 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737568 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.743396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.756939 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840842 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.045968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046130 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253638 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356295 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460217 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.462391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.462435 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.462403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:50 crc kubenswrapper[4869]: E0202 14:34:50.462563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:50 crc kubenswrapper[4869]: E0202 14:34:50.462692 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:50 crc kubenswrapper[4869]: E0202 14:34:50.462741 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.465532 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:42:26.11356667 +0000 UTC Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.563657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.563731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.563746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.564110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.564144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666564 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.871900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.871998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.872015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.872038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.872057 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975152 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181489 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.284931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.388642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.462412 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:51 crc kubenswrapper[4869]: E0202 14:34:51.462611 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.466534 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:10:52.838350978 +0000 UTC Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492440 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.595754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.699223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.699623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.699817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.700112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.700341 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803872 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008811 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111156 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213332 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316165 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.419882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.419978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.420000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.420023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.420041 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.462645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.462736 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:52 crc kubenswrapper[4869]: E0202 14:34:52.462877 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.462994 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:52 crc kubenswrapper[4869]: E0202 14:34:52.463086 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:52 crc kubenswrapper[4869]: E0202 14:34:52.463244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.467002 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:34:09.311906316 +0000 UTC
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624665 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727648 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831424 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934082 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346786 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449683 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.461701 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:53 crc kubenswrapper[4869]: E0202 14:34:53.462015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.467170 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:46:00.544050706 +0000 UTC
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.552823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.553468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.553687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.553901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.554426 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658155 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.864576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865239 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071656 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175700 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270427 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.294041 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.298978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299085 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.315227 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321874 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.335686 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.352005 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.372220 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.372420 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.375590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
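All of the patch attempts above fail identically before the kubelet gives up: the node-status update is gated by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock now reads 2026-02-02. A minimal sketch for confirming the expiry from the node, assuming Python 3 with the third-party cryptography package installed and an endpoint that completes a TLS handshake without a client certificate (the script name and approach are illustrative, not part of the log):

    # check_webhook_cert.py -- illustrative helper, not from the log.
    # Fetches the webhook's serving certificate and compares its notAfter
    # against the current time, mirroring the x509 error recorded above.
    import datetime
    import ssl

    from cryptography import x509  # third-party package, assumed installed

    # Grab the PEM certificate without verifying it -- verification is
    # exactly the step that fails in the kubelet log.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    now = datetime.datetime.utcnow()  # naive UTC, same as not_valid_after
    print("notAfter:", cert.not_valid_after)
    print("expired: ", cert.not_valid_after < now)

Against the timestamps in this log the check would report expired: True (notAfter 2025-08-24 17:21:41); on OpenShift Local/CRC this pattern typically means the cluster's internal certificates lapsed while the VM was powered off, and node status updates cannot succeed until they are rotated.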
event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376437 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.462109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.462210 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.462143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.462678 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.462819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.463000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.467482 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:03:12.077957872 +0000 UTC Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.479972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480105 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996627 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203518 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.305779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.305832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.305849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.306246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.306267 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.462694 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:55 crc kubenswrapper[4869]: E0202 14:34:55.462949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
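
Every "Node became not ready" repetition above carries the same root cause string: the kubelet's runtime network check finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so NetworkReady stays False and no pod sandboxes can be created, which is also why the network-metrics-daemon, network-check-* and networking-console-plugin pods keep failing to sync. The condition clears on its own once the network provider writes its config into that directory. A minimal sketch of the same readiness test (the directory comes straight from the log message; the accepted extensions are an assumption based on common CNI config loaders, not a confirmed kubelet list):

from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log
CNI_EXTS = {".conf", ".conflist", ".json"}        # assumed accepted extensions

def cni_config_present(conf_dir: Path = CNI_CONF_DIR) -> bool:
    """Return True once at least one CNI config file exists, i.e. the state
    the kubelet above is waiting for before reporting NetworkReady=true."""
    if not conf_dir.is_dir():
        return False
    return any(p.is_file() and p.suffix in CNI_EXTS for p in conf_dir.iterdir())

print("CNI config present:", cni_config_present())

One plausible reading of this log is a dependency loop: the network provider's own pods cannot come up cleanly while the expired webhook certificate blocks node updates, so the CNI config never appears and the node stays NotReady; that chain is a hypothesis, not something the log proves by itself.
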
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.467629 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:26:05.709142723 +0000 UTC Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512675 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615509 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718533 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.821825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.821953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.821982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.822007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.822024 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925349 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131576 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240884 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343898 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446803 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.462184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.462210 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.462776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:56 crc kubenswrapper[4869]: E0202 14:34:56.463010 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:56 crc kubenswrapper[4869]: E0202 14:34:56.463262 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:56 crc kubenswrapper[4869]: E0202 14:34:56.463357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.468717 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:16:25.036134343 +0000 UTC Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653351 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.756861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860515 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963593 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.065876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.065961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.065981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.066033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.066054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168976 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271581 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.373959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.461697 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:57 crc kubenswrapper[4869]: E0202 14:34:57.461902 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
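
One more pattern worth decoding: the certificate_manager lines above report the same kubelet-serving expiration (2026-02-24 05:53:03 UTC) but a different "rotation deadline" on every attempt (2026-01-17, 2025-11-28, 2025-12-09, ...). That scatter is expected: client-go's certificate manager re-draws the deadline at a random point inside the certificate's validity window each time it evaluates rotation, and since every draw already lies in the past on 2026-02-02, rotation is permanently due and retried about once a second. A sketch of the draw; the 70-90% fractions follow the jitteryDuration helper in k8s.io/client-go/util/certificate as I understand it, and the one-year notBefore is an assumption:

import random
from datetime import datetime, timedelta

NOT_AFTER = datetime(2026, 2, 24, 5, 53, 3)   # expiration from the log lines
NOT_BEFORE = NOT_AFTER - timedelta(days=365)  # assumed one-year validity window

def next_rotation_deadline() -> datetime:
    # Deadline lands 70-90% of the way through the validity window
    # (fractions assumed from client-go's jitteryDuration).
    total = (NOT_AFTER - NOT_BEFORE).total_seconds()
    return NOT_BEFORE + timedelta(seconds=total * (0.7 + 0.2 * random.random()))

for _ in range(3):
    print(next_rotation_deadline())  # draws fall roughly between 2025-11-07 and 2026-01-19

That range brackets every deadline in this log, so the moving dates are jitter, not clock drift; the actionable problems remain the expired webhook certificate and the missing CNI configuration noted above.
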
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.469751 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:44:08.527890612 +0000 UTC Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.476969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786351 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888938 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888970 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991109 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094496 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198117 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301855 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404767 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.462064 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.462109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:58 crc kubenswrapper[4869]: E0202 14:34:58.462238 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.462328 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:58 crc kubenswrapper[4869]: E0202 14:34:58.462480 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:58 crc kubenswrapper[4869]: E0202 14:34:58.462597 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.470790 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:08:05.264400504 +0000 UTC
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.610842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611894 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.714884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715487 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.817714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818353 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.922015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024540 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127299 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332207 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.462337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:59 crc kubenswrapper[4869]: E0202 14:34:59.462544 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.471755 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:50:54.928275588 +0000 UTC
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.477999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.493520 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.508538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.526132 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537555 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.538538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.550606 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.558883 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.573949 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.591484 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ae4835-4a7a-4f35-9a26-1b652269688f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.600750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65804f76-1783-4c7e-b1b2-c8b08c84615f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.614007 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.625687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639996 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.641770 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bc2c9bc90b9fab3d75a45efcf106325408f08f
f1ab4e7b2ad5b92cad760ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.650405 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.659726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.670764 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 
2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.681657 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.696757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.708738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742751 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845605 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059203 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162624 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265831 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.462258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.462376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:00 crc kubenswrapper[4869]: E0202 14:35:00.462460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.462507 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:00 crc kubenswrapper[4869]: E0202 14:35:00.462600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:00 crc kubenswrapper[4869]: E0202 14:35:00.462699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.471872 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:43:32.179007309 +0000 UTC
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472888 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575380 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.678774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.678971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.679014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.679043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.679067 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782510 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885756 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192506 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295211 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397795 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.462248 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:01 crc kubenswrapper[4869]: E0202 14:35:01.462709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.472491 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:31:04.100348493 +0000 UTC
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500261 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603206 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.707825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.707973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.707984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.708022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.708041 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811398 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914751 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914842 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.017982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018068 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.121011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.223831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224369 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.264803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.265517 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.265773 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:36:06.265739627 +0000 UTC m=+167.910376437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.462664 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.462807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.462691 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.462898 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.462999 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.463076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.472905 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:45:07.448446532 +0000 UTC
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531718 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634664 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737957 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841563 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.915362 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" probeResult="failure" output=""
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944928 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151339 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254838 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460652 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.462391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:03 crc kubenswrapper[4869]: E0202 14:35:03.462531 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.473532 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:30:14.781165447 +0000 UTC
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.562997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563070 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769742 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976827 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183220 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287357 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.461771 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.461834 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:04 crc kubenswrapper[4869]: E0202 14:35:04.462070 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:04 crc kubenswrapper[4869]: E0202 14:35:04.462202 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.462336 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:04 crc kubenswrapper[4869]: E0202 14:35:04.462479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.473673 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:41:54.971958613 +0000 UTC
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.652108 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"]
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.652732 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655244 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655410 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.715696 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.71566982 podStartE2EDuration="1m17.71566982s" podCreationTimestamp="2026-02-02 14:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.715223629 +0000 UTC m=+106.359860419" watchObservedRunningTime="2026-02-02 14:35:04.71566982 +0000 UTC m=+106.360306600"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.716049 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.71604088 podStartE2EDuration="19.71604088s" podCreationTimestamp="2026-02-02 14:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.695429573 +0000 UTC m=+106.340066343" watchObservedRunningTime="2026-02-02 14:35:04.71604088 +0000 UTC m=+106.360677670"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.736183 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-d9vfd" podStartSLOduration=82.736159914 podStartE2EDuration="1m22.736159914s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.733293191 +0000 UTC m=+106.377929961" watchObservedRunningTime="2026-02-02 14:35:04.736159914 +0000 UTC m=+106.380796704"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.768377 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" podStartSLOduration=81.768317806 podStartE2EDuration="1m21.768317806s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.76615863 +0000 UTC m=+106.410795400" watchObservedRunningTime="2026-02-02 14:35:04.768317806 +0000 UTC m=+106.412954606"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.768592 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-492m9" podStartSLOduration=82.768584793 podStartE2EDuration="1m22.768584793s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.749290819 +0000 UTC m=+106.393927599" watchObservedRunningTime="2026-02-02 14:35:04.768584793 +0000 UTC m=+106.413221603"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35773d6f-75dc-4f55-b843-7153b80a9ce9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795765 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795802 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35773d6f-75dc-4f55-b843-7153b80a9ce9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35773d6f-75dc-4f55-b843-7153b80a9ce9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.825924 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.825874627 podStartE2EDuration="57.825874627s" podCreationTimestamp="2026-02-02 14:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.824122372 +0000 UTC m=+106.468759142" watchObservedRunningTime="2026-02-02 14:35:04.825874627 +0000 UTC m=+106.470511397"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.864440 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.864412662 podStartE2EDuration="25.864412662s" podCreationTimestamp="2026-02-02 14:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.842226765 +0000 UTC m=+106.486863545" watchObservedRunningTime="2026-02-02 14:35:04.864412662 +0000 UTC m=+106.509049432"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.879966 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podStartSLOduration=82.879948479 podStartE2EDuration="1m22.879948479s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.879562919 +0000 UTC m=+106.524199689" watchObservedRunningTime="2026-02-02 14:35:04.879948479 +0000 UTC m=+106.524585249"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35773d6f-75dc-4f55-b843-7153b80a9ce9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897279
4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35773d6f-75dc-4f55-b843-7153b80a9ce9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35773d6f-75dc-4f55-b843-7153b80a9ce9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.898562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35773d6f-75dc-4f55-b843-7153b80a9ce9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.911623 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podStartSLOduration=82.911604538 podStartE2EDuration="1m22.911604538s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.910936181 +0000 UTC m=+106.555572971" watchObservedRunningTime="2026-02-02 14:35:04.911604538 +0000 UTC m=+106.556241308" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.914006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35773d6f-75dc-4f55-b843-7153b80a9ce9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.926997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35773d6f-75dc-4f55-b843-7153b80a9ce9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.969048 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.974959 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7tlsl" podStartSLOduration=82.974938836 podStartE2EDuration="1m22.974938836s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.974562547 +0000 UTC m=+106.619199327" watchObservedRunningTime="2026-02-02 14:35:04.974938836 +0000 UTC m=+106.619575606" Feb 02 14:35:04 crc kubenswrapper[4869]: W0202 14:35:04.983406 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35773d6f_75dc_4f55_b843_7153b80a9ce9.slice/crio-8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9 WatchSource:0}: Error finding container 8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9: Status 404 returned error can't find the container with id 8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9 Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.023419 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-862tl" podStartSLOduration=83.023401185 podStartE2EDuration="1m23.023401185s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:05.005933319 +0000 UTC m=+106.650570109" watchObservedRunningTime="2026-02-02 14:35:05.023401185 +0000 UTC m=+106.668037955" Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.040561 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=83.040532533 podStartE2EDuration="1m23.040532533s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:05.023489837 +0000 UTC m=+106.668126597" watchObservedRunningTime="2026-02-02 14:35:05.040532533 +0000 UTC m=+106.685169303" Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.462777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:05 crc kubenswrapper[4869]: E0202 14:35:05.463025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.473888 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:08:58.527436964 +0000 UTC Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.474012 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.482808 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.490389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" event={"ID":"35773d6f-75dc-4f55-b843-7153b80a9ce9","Type":"ContainerStarted","Data":"76d5a5f96044e67002795d68db9e260745dea48860dbf17e6ad7116fdc2c0027"} Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.490433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" event={"ID":"35773d6f-75dc-4f55-b843-7153b80a9ce9","Type":"ContainerStarted","Data":"8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9"} Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.506294 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" podStartSLOduration=83.506270656 podStartE2EDuration="1m23.506270656s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:05.505990229 +0000 UTC m=+107.150627039" watchObservedRunningTime="2026-02-02 14:35:05.506270656 +0000 UTC m=+107.150907416" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.462535 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.462575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.462621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.462677 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.462779 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.462942 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.496280 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.497074 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.500659 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" exitCode=1 Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.500717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0"} Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.500772 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.501773 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.501998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:07 crc kubenswrapper[4869]: I0202 14:35:07.462261 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:07 crc kubenswrapper[4869]: E0202 14:35:07.462400 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:07 crc kubenswrapper[4869]: I0202 14:35:07.506011 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462205 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462288 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:08 crc kubenswrapper[4869]: E0202 14:35:08.462327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462205 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:08 crc kubenswrapper[4869]: E0202 14:35:08.462424 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:08 crc kubenswrapper[4869]: E0202 14:35:08.462581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:09 crc kubenswrapper[4869]: I0202 14:35:09.461823 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:09 crc kubenswrapper[4869]: E0202 14:35:09.465642 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:10 crc kubenswrapper[4869]: I0202 14:35:10.462504 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:10 crc kubenswrapper[4869]: E0202 14:35:10.462683 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:10 crc kubenswrapper[4869]: I0202 14:35:10.462758 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:10 crc kubenswrapper[4869]: I0202 14:35:10.462764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:10 crc kubenswrapper[4869]: E0202 14:35:10.462846 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:10 crc kubenswrapper[4869]: E0202 14:35:10.463044 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:11 crc kubenswrapper[4869]: I0202 14:35:11.462103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:11 crc kubenswrapper[4869]: E0202 14:35:11.462270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:12 crc kubenswrapper[4869]: I0202 14:35:12.461960 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:12 crc kubenswrapper[4869]: I0202 14:35:12.462103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:12 crc kubenswrapper[4869]: I0202 14:35:12.462112 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:12 crc kubenswrapper[4869]: E0202 14:35:12.462738 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:12 crc kubenswrapper[4869]: E0202 14:35:12.462903 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:12 crc kubenswrapper[4869]: E0202 14:35:12.463149 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:13 crc kubenswrapper[4869]: I0202 14:35:13.463376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:13 crc kubenswrapper[4869]: E0202 14:35:13.463623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:14 crc kubenswrapper[4869]: I0202 14:35:14.462244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:14 crc kubenswrapper[4869]: E0202 14:35:14.462456 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:14 crc kubenswrapper[4869]: I0202 14:35:14.462696 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:14 crc kubenswrapper[4869]: I0202 14:35:14.462739 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:14 crc kubenswrapper[4869]: E0202 14:35:14.462818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:14 crc kubenswrapper[4869]: E0202 14:35:14.463049 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:15 crc kubenswrapper[4869]: I0202 14:35:15.462789 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:15 crc kubenswrapper[4869]: E0202 14:35:15.463090 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:16 crc kubenswrapper[4869]: I0202 14:35:16.461817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:16 crc kubenswrapper[4869]: I0202 14:35:16.461878 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:16 crc kubenswrapper[4869]: I0202 14:35:16.461944 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:16 crc kubenswrapper[4869]: E0202 14:35:16.461997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:16 crc kubenswrapper[4869]: E0202 14:35:16.462129 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:16 crc kubenswrapper[4869]: E0202 14:35:16.462295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:17 crc kubenswrapper[4869]: I0202 14:35:17.463044 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:17 crc kubenswrapper[4869]: I0202 14:35:17.463896 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:17 crc kubenswrapper[4869]: E0202 14:35:17.464147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:17 crc kubenswrapper[4869]: E0202 14:35:17.464713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.462418 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.462452 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.462583 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.462737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.462862 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.463074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.545209 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/0.log" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546526 4869 generic.go:334] "Generic (PLEG): container finished" podID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" exitCode=1 Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerDied","Data":"e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a"} Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546636 4869 scope.go:117] "RemoveContainer" containerID="b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.547070 4869 scope.go:117] "RemoveContainer" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.547266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-d9vfd_openshift-multus(45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0)\"" pod="openshift-multus/multus-d9vfd" podUID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" Feb 02 14:35:19 crc kubenswrapper[4869]: I0202 14:35:19.462233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:19 crc kubenswrapper[4869]: E0202 14:35:19.463440 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:19 crc kubenswrapper[4869]: E0202 14:35:19.470986 4869 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 14:35:19 crc kubenswrapper[4869]: I0202 14:35:19.554356 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:35:19 crc kubenswrapper[4869]: E0202 14:35:19.570488 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:20 crc kubenswrapper[4869]: I0202 14:35:20.461879 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:20 crc kubenswrapper[4869]: I0202 14:35:20.462033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:20 crc kubenswrapper[4869]: E0202 14:35:20.462175 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:20 crc kubenswrapper[4869]: I0202 14:35:20.462271 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:20 crc kubenswrapper[4869]: E0202 14:35:20.462334 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:20 crc kubenswrapper[4869]: E0202 14:35:20.462454 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:21 crc kubenswrapper[4869]: I0202 14:35:21.462159 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:21 crc kubenswrapper[4869]: E0202 14:35:21.463146 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:22 crc kubenswrapper[4869]: I0202 14:35:22.462488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:22 crc kubenswrapper[4869]: I0202 14:35:22.462609 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:22 crc kubenswrapper[4869]: E0202 14:35:22.462733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:22 crc kubenswrapper[4869]: I0202 14:35:22.462849 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:22 crc kubenswrapper[4869]: E0202 14:35:22.462992 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:22 crc kubenswrapper[4869]: E0202 14:35:22.463143 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:23 crc kubenswrapper[4869]: I0202 14:35:23.462298 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:23 crc kubenswrapper[4869]: E0202 14:35:23.462464 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:24 crc kubenswrapper[4869]: I0202 14:35:24.462645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:24 crc kubenswrapper[4869]: I0202 14:35:24.462645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:24 crc kubenswrapper[4869]: I0202 14:35:24.462634 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.462813 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.463025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.463162 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.572284 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:25 crc kubenswrapper[4869]: I0202 14:35:25.462309 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:25 crc kubenswrapper[4869]: E0202 14:35:25.462510 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:26 crc kubenswrapper[4869]: I0202 14:35:26.462068 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:26 crc kubenswrapper[4869]: E0202 14:35:26.462290 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:26 crc kubenswrapper[4869]: I0202 14:35:26.462100 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:26 crc kubenswrapper[4869]: I0202 14:35:26.462076 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:26 crc kubenswrapper[4869]: E0202 14:35:26.462429 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:26 crc kubenswrapper[4869]: E0202 14:35:26.462797 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:27 crc kubenswrapper[4869]: I0202 14:35:27.462702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:27 crc kubenswrapper[4869]: E0202 14:35:27.463085 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.462066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.462147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.462266 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.462307 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.462652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.462738 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.463391 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.463612 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:29 crc kubenswrapper[4869]: I0202 14:35:29.462346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:29 crc kubenswrapper[4869]: E0202 14:35:29.463845 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:29 crc kubenswrapper[4869]: E0202 14:35:29.573010 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:30 crc kubenswrapper[4869]: I0202 14:35:30.462458 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:30 crc kubenswrapper[4869]: I0202 14:35:30.462569 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:30 crc kubenswrapper[4869]: E0202 14:35:30.462624 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:30 crc kubenswrapper[4869]: E0202 14:35:30.462754 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:30 crc kubenswrapper[4869]: I0202 14:35:30.462569 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:30 crc kubenswrapper[4869]: E0202 14:35:30.462872 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:31 crc kubenswrapper[4869]: I0202 14:35:31.462147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:31 crc kubenswrapper[4869]: E0202 14:35:31.462326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:31 crc kubenswrapper[4869]: I0202 14:35:31.462618 4869 scope.go:117] "RemoveContainer" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.461746 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:32 crc kubenswrapper[4869]: E0202 14:35:32.462453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.461990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:32 crc kubenswrapper[4869]: E0202 14:35:32.462551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.461931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:32 crc kubenswrapper[4869]: E0202 14:35:32.462725 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.602137 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.602217 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9"} Feb 02 14:35:33 crc kubenswrapper[4869]: I0202 14:35:33.461836 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:33 crc kubenswrapper[4869]: E0202 14:35:33.462045 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:34 crc kubenswrapper[4869]: I0202 14:35:34.462527 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:34 crc kubenswrapper[4869]: I0202 14:35:34.462652 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:34 crc kubenswrapper[4869]: I0202 14:35:34.462713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.462737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.462833 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.462943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.575230 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:35 crc kubenswrapper[4869]: I0202 14:35:35.462012 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:35 crc kubenswrapper[4869]: E0202 14:35:35.462219 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:36 crc kubenswrapper[4869]: I0202 14:35:36.462498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:36 crc kubenswrapper[4869]: I0202 14:35:36.462693 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:36 crc kubenswrapper[4869]: I0202 14:35:36.462803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:36 crc kubenswrapper[4869]: E0202 14:35:36.462732 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:36 crc kubenswrapper[4869]: E0202 14:35:36.463022 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:36 crc kubenswrapper[4869]: E0202 14:35:36.463181 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:37 crc kubenswrapper[4869]: I0202 14:35:37.461824 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:37 crc kubenswrapper[4869]: E0202 14:35:37.462074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:38 crc kubenswrapper[4869]: I0202 14:35:38.461899 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:38 crc kubenswrapper[4869]: I0202 14:35:38.462065 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:38 crc kubenswrapper[4869]: I0202 14:35:38.462146 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:38 crc kubenswrapper[4869]: E0202 14:35:38.462155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:38 crc kubenswrapper[4869]: E0202 14:35:38.462281 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:38 crc kubenswrapper[4869]: E0202 14:35:38.462557 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:39 crc kubenswrapper[4869]: I0202 14:35:39.462151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:39 crc kubenswrapper[4869]: E0202 14:35:39.463674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:39 crc kubenswrapper[4869]: E0202 14:35:39.575901 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:40 crc kubenswrapper[4869]: I0202 14:35:40.462726 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:40 crc kubenswrapper[4869]: I0202 14:35:40.462858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:40 crc kubenswrapper[4869]: I0202 14:35:40.462980 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:40 crc kubenswrapper[4869]: E0202 14:35:40.463038 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:40 crc kubenswrapper[4869]: E0202 14:35:40.463159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:40 crc kubenswrapper[4869]: E0202 14:35:40.463448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:41 crc kubenswrapper[4869]: I0202 14:35:41.462449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:41 crc kubenswrapper[4869]: E0202 14:35:41.462621 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:42 crc kubenswrapper[4869]: I0202 14:35:42.462633 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:42 crc kubenswrapper[4869]: I0202 14:35:42.462657 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:42 crc kubenswrapper[4869]: E0202 14:35:42.463627 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:42 crc kubenswrapper[4869]: I0202 14:35:42.462688 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:42 crc kubenswrapper[4869]: E0202 14:35:42.463720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:42 crc kubenswrapper[4869]: E0202 14:35:42.463852 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:43 crc kubenswrapper[4869]: I0202 14:35:43.462672 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:43 crc kubenswrapper[4869]: E0202 14:35:43.463179 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:43 crc kubenswrapper[4869]: I0202 14:35:43.464020 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:43 crc kubenswrapper[4869]: E0202 14:35:43.464194 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:44 crc kubenswrapper[4869]: I0202 14:35:44.461725 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:44 crc kubenswrapper[4869]: I0202 14:35:44.461779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:44 crc kubenswrapper[4869]: I0202 14:35:44.461941 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.461984 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.462049 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.462134 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.577213 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:45 crc kubenswrapper[4869]: I0202 14:35:45.304439 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:35:45 crc kubenswrapper[4869]: I0202 14:35:45.304532 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:35:45 crc kubenswrapper[4869]: I0202 14:35:45.462501 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:45 crc kubenswrapper[4869]: E0202 14:35:45.463034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:46 crc kubenswrapper[4869]: I0202 14:35:46.461675 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:46 crc kubenswrapper[4869]: I0202 14:35:46.461736 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:46 crc kubenswrapper[4869]: I0202 14:35:46.461772 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:46 crc kubenswrapper[4869]: E0202 14:35:46.461845 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:46 crc kubenswrapper[4869]: E0202 14:35:46.462054 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:46 crc kubenswrapper[4869]: E0202 14:35:46.462185 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:47 crc kubenswrapper[4869]: I0202 14:35:47.462753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:47 crc kubenswrapper[4869]: E0202 14:35:47.463970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:48 crc kubenswrapper[4869]: I0202 14:35:48.462453 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:48 crc kubenswrapper[4869]: I0202 14:35:48.462617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:48 crc kubenswrapper[4869]: E0202 14:35:48.462674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:48 crc kubenswrapper[4869]: E0202 14:35:48.462818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:48 crc kubenswrapper[4869]: I0202 14:35:48.462447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:48 crc kubenswrapper[4869]: E0202 14:35:48.462960 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:49 crc kubenswrapper[4869]: I0202 14:35:49.462348 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:49 crc kubenswrapper[4869]: E0202 14:35:49.463699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:49 crc kubenswrapper[4869]: E0202 14:35:49.577845 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.462102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.462236 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.462309 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.462441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.462575 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.462735 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553532 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.553484597 +0000 UTC m=+274.198121377 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553585 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553597 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553653 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.55363403 +0000 UTC m=+274.198270810 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553691 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553854 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553900 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553966 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 14:37:52.553954529 +0000 UTC m=+274.198591309 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553983 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553984 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554007 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554034 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554060 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554103 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.554076432 +0000 UTC m=+274.198713242 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554168 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.554137823 +0000 UTC m=+274.198774643 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:35:51 crc kubenswrapper[4869]: I0202 14:35:51.461863 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:51 crc kubenswrapper[4869]: E0202 14:35:51.462092 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:52 crc kubenswrapper[4869]: I0202 14:35:52.462531 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:52 crc kubenswrapper[4869]: I0202 14:35:52.462627 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:52 crc kubenswrapper[4869]: E0202 14:35:52.462751 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:52 crc kubenswrapper[4869]: I0202 14:35:52.462819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:52 crc kubenswrapper[4869]: E0202 14:35:52.463021 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:52 crc kubenswrapper[4869]: E0202 14:35:52.463270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:53 crc kubenswrapper[4869]: I0202 14:35:53.462159 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:53 crc kubenswrapper[4869]: E0202 14:35:53.462340 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:54 crc kubenswrapper[4869]: I0202 14:35:54.461777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:54 crc kubenswrapper[4869]: I0202 14:35:54.461834 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:54 crc kubenswrapper[4869]: I0202 14:35:54.461790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.462048 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.462146 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.462276 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.579941 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:55 crc kubenswrapper[4869]: I0202 14:35:55.462508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:55 crc kubenswrapper[4869]: E0202 14:35:55.462713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:56 crc kubenswrapper[4869]: I0202 14:35:56.461875 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:56 crc kubenswrapper[4869]: I0202 14:35:56.462017 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:56 crc kubenswrapper[4869]: E0202 14:35:56.462120 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:56 crc kubenswrapper[4869]: I0202 14:35:56.462138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:56 crc kubenswrapper[4869]: E0202 14:35:56.462288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:56 crc kubenswrapper[4869]: E0202 14:35:56.462513 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:57 crc kubenswrapper[4869]: I0202 14:35:57.462258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:57 crc kubenswrapper[4869]: E0202 14:35:57.462489 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.462332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.462500 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:58 crc kubenswrapper[4869]: E0202 14:35:58.462586 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:58 crc kubenswrapper[4869]: E0202 14:35:58.462683 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.463051 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:58 crc kubenswrapper[4869]: E0202 14:35:58.463147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.463511 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.700603 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.705247 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.706783 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:35:59 crc kubenswrapper[4869]: I0202 14:35:59.462299 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:59 crc kubenswrapper[4869]: E0202 14:35:59.462880 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:59 crc kubenswrapper[4869]: I0202 14:35:59.546159 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qx2qt"] Feb 02 14:35:59 crc kubenswrapper[4869]: I0202 14:35:59.546372 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:59 crc kubenswrapper[4869]: E0202 14:35:59.546522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:59 crc kubenswrapper[4869]: E0202 14:35:59.580452 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:36:00 crc kubenswrapper[4869]: I0202 14:36:00.462354 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:36:00 crc kubenswrapper[4869]: I0202 14:36:00.462540 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:36:00 crc kubenswrapper[4869]: E0202 14:36:00.462623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:36:00 crc kubenswrapper[4869]: E0202 14:36:00.462698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:36:01 crc kubenswrapper[4869]: I0202 14:36:01.461799 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:01 crc kubenswrapper[4869]: I0202 14:36:01.461933 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:36:01 crc kubenswrapper[4869]: E0202 14:36:01.462076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:36:01 crc kubenswrapper[4869]: E0202 14:36:01.462144 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:36:02 crc kubenswrapper[4869]: I0202 14:36:02.462754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:36:02 crc kubenswrapper[4869]: I0202 14:36:02.462869 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:36:02 crc kubenswrapper[4869]: E0202 14:36:02.463130 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:36:02 crc kubenswrapper[4869]: E0202 14:36:02.463341 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:36:03 crc kubenswrapper[4869]: I0202 14:36:03.462444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:03 crc kubenswrapper[4869]: E0202 14:36:03.462753 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:36:03 crc kubenswrapper[4869]: I0202 14:36:03.462882 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:36:03 crc kubenswrapper[4869]: E0202 14:36:03.463249 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:36:04 crc kubenswrapper[4869]: I0202 14:36:04.462347 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:36:04 crc kubenswrapper[4869]: I0202 14:36:04.462356 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:36:04 crc kubenswrapper[4869]: E0202 14:36:04.462578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:36:04 crc kubenswrapper[4869]: E0202 14:36:04.462680 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.462264 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.462272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.465458 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.466389 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.467060 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.469463 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.498841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.544016 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.544578 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.550591 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.550887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.550957 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.551007 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.551071 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.551354 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.562688 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.563510 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.563946 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.564206 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.565362 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.566140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.567971 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.569075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.569198 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4hhbx"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.570603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.571901 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.572506 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.575564 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.583134 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.583520 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.583757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.584018 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x5lbr"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.584616 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.605139 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zqdwm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.606091 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.606423 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.606928 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.607163 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.607648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.607962 4869 util.go:30] "No sandbox for pod can be found. 
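The interleaved kubelet.go:2421 "SyncLoop ADD" and util.go:30 "No sandbox" entries above record the kubelet's main sync loop receiving newly observed pods from the API-server watch and discovering that each one needs a fresh sandbox. A schematic Go sketch of that dispatch pattern (the PodUpdate type and syncLoop function are hypothetical; the real kubelet selects over several event channels, including config updates and pod lifecycle events):

    // syncloop.go: schematic dispatch in the spirit of the "SyncLoop ADD"
    // lines above. All names here are illustrative, not kubelet source.
    package main

    import "fmt"

    type PodUpdate struct {
        Op   string   // "ADD", "UPDATE", ...
        Pods []string // namespace/name, as printed in the log
    }

    func syncLoop(updates <-chan PodUpdate) {
        for u := range updates {
            fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.Op, "api", u.Pods)
            for _, pod := range u.Pods {
                // A per-pod worker would now look for an existing sandbox
                // and, finding none, start a new one.
                fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", pod)
            }
        }
    }

    func main() {
        ch := make(chan PodUpdate, 1)
        ch <- PodUpdate{Op: "ADD", Pods: []string{"openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"}}
        close(ch)
        syncLoop(ch)
    }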
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608466 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608663 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608833 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609029 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609257 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609403 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609420 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609452 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609570 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609735 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609773 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609408 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609885 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609890 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609998 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610010 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609780 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc 
kubenswrapper[4869]: I0202 14:36:05.610110 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610153 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610211 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610299 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610337 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610379 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610523 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610559 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610450 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610661 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610736 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610757 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610873 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610684 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611011 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611043 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611076 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611228 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612269 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612436 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612622 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612954 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.613485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.614180 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.615147 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxvvv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.615976 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.616162 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.616820 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.617110 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
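Each reflector.go:368 "Caches populated" entry above marks a client-go reflector finishing its initial List+Watch for one Secret or ConfigMap that a pending pod references; volume mounting for those pods waits on these caches. A hedged sketch of the same mechanism using the public client-go informer API (the namespace and resync interval are arbitrary example choices; the kubelet wires its per-object caches differently):

    // cachesync.go: informer-based cache warm-up, in the spirit of the
    // "Caches populated" lines above. Assumes in-cluster credentials and
    // a go.mod that pulls in k8s.io/client-go.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 10*time.Minute, informers.WithNamespace("openshift-authentication"))
        secrets := factory.Core().V1().Secrets().Informer()
        configmaps := factory.Core().V1().ConfigMaps().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Blocks until the initial List for each type completes -- the
        // moment a reflector would report its cache as populated.
        if !cache.WaitForCacheSync(stop, secrets.HasSynced, configmaps.HasSynced) {
            panic("caches never synced")
        }
        fmt.Println("caches populated; dependent volume mounts can proceed")
    }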
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.619081 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.635143 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.637576 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.637993 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whptb"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.638924 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.639307 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.639495 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.639976 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.640421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.640978 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641489 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641517 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641583 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641675 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641883 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.642078 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.642582 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643543 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643612 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae3c559-c92e-45a1-8e66-383dee4460cd-serving-cert\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57glr\" (UniqueName: \"kubernetes.io/projected/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-kube-api-access-57glr\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgxf\" (UniqueName: 
\"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w927m\" (UniqueName: \"kubernetes.io/projected/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-kube-api-access-w927m\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9922f280-ff61-424a-a336-769c0cfb5da2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.644566 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.644608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.644638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.649132 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.653106 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.653377 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 
14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.653608 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.655298 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m44c2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.655848 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.682386 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.682650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.683099 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.684143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.684794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.685499 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-audit\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686143 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-policies\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-audit-dir\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-encryption-config\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686547 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-client\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-image-import-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686670 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-machine-approver-tls\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686727 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9922f280-ff61-424a-a336-769c0cfb5da2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686819 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-serving-cert\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pngwl\" (UniqueName: \"kubernetes.io/projected/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-kube-api-access-pngwl\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-config\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf59w\" (UniqueName: \"kubernetes.io/projected/9922f280-ff61-424a-a336-769c0cfb5da2-kube-api-access-rf59w\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687266 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-auth-proxy-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-service-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687515 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687557 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-serving-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fb104b8-53b8-45dd-8406-206d6ba5a250-metrics-tls\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-encryption-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687693 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687744 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-serving-cert\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksd68\" (UniqueName: \"kubernetes.io/projected/dae3c559-c92e-45a1-8e66-383dee4460cd-kube-api-access-ksd68\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687937 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687974 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-node-pullsecrets\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-dir\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688151 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-client\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/0fb104b8-53b8-45dd-8406-206d6ba5a250-kube-api-access-797zm\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: 
\"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688346 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svkg\" (UniqueName: \"kubernetes.io/projected/78130644-70b6-4285-9ca7-e5a671bd1111-kube-api-access-4svkg\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688428 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688455 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688499 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688769 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.693158 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.693553 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.694567 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.709957 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.710393 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.711137 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.715232 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.717654 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.717835 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.718007 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.719158 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.719412 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.719942 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.720545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.722986 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.724936 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.725479 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.725780 4869 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.726303 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.726663 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727105 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727456 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727658 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727828 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.728104 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.728410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730357 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730572 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730708 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730843 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732489 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732517 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732656 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732766 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733110 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733223 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733462 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734189 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734700 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734786 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734848 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.735107 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.735281 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.735442 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.736184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.736415 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.736855 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.738584 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.738780 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.739573 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.739749 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.740357 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.740670 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.740851 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.742409 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.742731 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.745603 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8vv5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.745745 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.746369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.747016 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.747410 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.749360 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.749799 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.782687 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.783362 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.784721 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.786393 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.788501 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790153 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790173 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc 
kubenswrapper[4869]: I0202 14:36:05.790236 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae3c559-c92e-45a1-8e66-383dee4460cd-serving-cert\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790362 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w927m\" (UniqueName: \"kubernetes.io/projected/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-kube-api-access-w927m\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57glr\" (UniqueName: \"kubernetes.io/projected/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-kube-api-access-57glr\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790457 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9922f280-ff61-424a-a336-769c0cfb5da2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790483 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-audit\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-policies\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790644 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-client\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-image-import-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-audit-dir\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-encryption-config\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-machine-approver-tls\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9922f280-ff61-424a-a336-769c0cfb5da2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-serving-cert\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791046 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngwl\" (UniqueName: \"kubernetes.io/projected/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-kube-api-access-pngwl\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-config\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791086 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791105 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf59w\" (UniqueName: \"kubernetes.io/projected/9922f280-ff61-424a-a336-769c0cfb5da2-kube-api-access-rf59w\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: 
\"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-service-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-service-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.792474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.792891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-auth-proxy-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-serving-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fb104b8-53b8-45dd-8406-206d6ba5a250-metrics-tls\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-encryption-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793659 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-auth-proxy-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793810 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793873 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.794252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.795029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-serving-cert\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.795081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.795711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.796619 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-serving-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.796704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksd68\" (UniqueName: \"kubernetes.io/projected/dae3c559-c92e-45a1-8e66-383dee4460cd-kube-api-access-ksd68\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-node-pullsecrets\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.798793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-dir\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.798830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-client\") pod 
\"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.798856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.799085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-dir\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.799356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.799395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.800431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.800997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4svkg\" (UniqueName: \"kubernetes.io/projected/78130644-70b6-4285-9ca7-e5a671bd1111-kube-api-access-4svkg\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.801037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/0fb104b8-53b8-45dd-8406-206d6ba5a250-kube-api-access-797zm\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.801502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fb104b8-53b8-45dd-8406-206d6ba5a250-metrics-tls\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.802307 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.802795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.803093 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-client\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.803178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.803125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae3c559-c92e-45a1-8e66-383dee4460cd-serving-cert\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805304 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-snfqj"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805649 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805768 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-audit\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod 
\"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-policies\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.808074 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.808463 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.808560 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-node-pullsecrets\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809207 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9922f280-ff61-424a-a336-769c0cfb5da2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-config\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809976 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9922f280-ff61-424a-a336-769c0cfb5da2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.810018 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.810147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.811305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.811408 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"] Feb 02 14:36:05 crc 
kubenswrapper[4869]: I0202 14:36:05.812137 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-serving-cert\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-encryption-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812549 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-audit-dir\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813204 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-encryption-config\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813393 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-image-import-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-client\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-serving-cert\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814976 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mcwnk"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.815284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.816161 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.816538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.816951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.817007 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.818995 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxvvv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.819474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-machine-approver-tls\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.819650 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.821450 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.824292 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x5lbr"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.824332 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.825813 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zqdwm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.827350 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.827462 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.828672 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.829716 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.830886 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.831891 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.833345 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z4jh5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.834515 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.834651 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.835436 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.836837 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.837376 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.839193 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.839365 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.841311 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.843214 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.845600 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.846236 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.847447 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m44c2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.848711 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.850145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4hhbx"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.851592 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whptb"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.853000 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.854298 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.854828 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.857579 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.860015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.860558 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4jh5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.862784 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mcwnk"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.865288 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8vv5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.866612 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.867102 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.868751 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.871356 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.873046 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdq4v"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.875230 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdq4v"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.875247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.876922 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-245rt"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.877435 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.886631 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.918851 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.926518 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.947027 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.967386 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.987612 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.007291 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.027834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.047887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.067604 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.087103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.107886 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.127307 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.148034 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.168494 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.187058 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.208216 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.228267 4869 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.249106 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.267225 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.287490 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.306683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.308124 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.310255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.327301 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.348089 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.368002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.403711 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.417440 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.427571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.449850 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.462050 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.462043 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.468435 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.488045 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.507887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.528062 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.548350 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.567826 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.588065 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.608346 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.628171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.637628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qx2qt"] Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.648516 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.667403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.689133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.708529 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.726823 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.738645 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" event={"ID":"0b597927-2943-4e1a-bac5-1266d539e8f8","Type":"ContainerStarted","Data":"7dc6b95db8ef40ca28ca26cbe5cd5e850dbec7e4b3d376ce0c91dcc6c8cb82b0"} Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.746160 4869 request.go:700] Waited for 1.017402838s due to client-side throttling, not priority and fairness, 
request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.748361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.767766 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.787065 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.808563 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.828955 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.847105 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.887725 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.908542 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.928341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.948305 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.967838 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.986967 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.008050 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.027181 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.047826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.068404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.088319 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.114879 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.128033 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.147146 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.166732 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.188229 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.208188 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.253584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.267610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.288196 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.289819 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksd68\" (UniqueName: \"kubernetes.io/projected/dae3c559-c92e-45a1-8e66-383dee4460cd-kube-api-access-ksd68\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.321395 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.337007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57glr\" (UniqueName: \"kubernetes.io/projected/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-kube-api-access-57glr\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.363194 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.364219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w927m\" (UniqueName: \"kubernetes.io/projected/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-kube-api-access-w927m\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.371881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/0fb104b8-53b8-45dd-8406-206d6ba5a250-kube-api-access-797zm\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.380980 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.381705 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.387651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4svkg\" (UniqueName: \"kubernetes.io/projected/78130644-70b6-4285-9ca7-e5a671bd1111-kube-api-access-4svkg\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.406449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngwl\" (UniqueName: \"kubernetes.io/projected/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-kube-api-access-pngwl\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.407850 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: W0202 14:36:07.411075 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bef80e9_27d1_43c4_9a1f_4a86b2effe23.slice/crio-76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6 WatchSource:0}: Error finding container 76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6: Status 404 returned error can't find the container with id 76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6 Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.414167 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.428351 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.433873 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.448500 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.468237 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.489881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.494456 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.507927 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.528325 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.551923 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.552621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.568184 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.569229 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.571642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf59w\" (UniqueName: \"kubernetes.io/projected/9922f280-ff61-424a-a336-769c0cfb5da2-kube-api-access-rf59w\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.588058 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.593318 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:36:07 crc kubenswrapper[4869]: W0202 14:36:07.610002 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992c2b96_5783_4865_a47d_167caf91e241.slice/crio-92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365 WatchSource:0}: Error finding container 92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365: Status 404 returned error can't find the container with id 92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365 Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.614830 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.628425 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.648015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.650303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.668795 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.683579 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x5lbr"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.688137 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.708634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.728187 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.748217 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.765125 4869 request.go:700] Waited for 1.88742985s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.768399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.789603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" event={"ID":"0b597927-2943-4e1a-bac5-1266d539e8f8","Type":"ContainerStarted","Data":"d0d20fb4b187a12a2a79cba7bb06c0a5f41f9056f50e4b03ce3097299f9c33b1"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.789696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" event={"ID":"0b597927-2943-4e1a-bac5-1266d539e8f8","Type":"ContainerStarted","Data":"9909c443f73f0529408e05055bf9cbd5ac2d26461ece1c2a09e1cb5216a0b581"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.790207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.793692 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" event={"ID":"0fb104b8-53b8-45dd-8406-206d6ba5a250","Type":"ContainerStarted","Data":"9d536b5002fb4c5739cdec4594a0130f7ca05a5e01a90ec55afb667f0d115aee"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.795718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" 
event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerStarted","Data":"92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.803991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" event={"ID":"0bef80e9-27d1-43c4-9a1f-4a86b2effe23","Type":"ContainerStarted","Data":"76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.806713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerStarted","Data":"16f76cd6bf05f6fb4f402ecc35e901805472a099619bf8e10a27be6e93584f89"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.812710 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.828604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4hhbx"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.833390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.833449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.833795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: E0202 14:36:07.834157 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.334141267 +0000 UTC m=+169.978778037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.834943 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835096 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsgdg\" (UniqueName: \"kubernetes.io/projected/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-kube-api-access-lsgdg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.838784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-42krp\" (UID: 
\"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.839132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.847492 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.868112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.930079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.935734 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.940760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a72caff3-6c15-4b44-9821-ed7b30a13b58-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzwd\" (UniqueName: \"kubernetes.io/projected/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-kube-api-access-mnzwd\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bgr\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-kube-api-access-z5bgr\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:07 crc 
kubenswrapper[4869]: I0202 14:36:07.941248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b506ef-4fcb-4bdc-bf47-f875c04441c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9f98e83-4853-4d43-bf81-09795442acc8-metrics-tls\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-mountpoint-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-cabundle\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkhc\" (UniqueName: \"kubernetes.io/projected/e1a1dc5f-b886-4775-a090-0fe774fb23ed-kube-api-access-gjkhc\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941480 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-certs\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/debcc43e-e06f-486a-af8c-6a9d4d553913-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-profile-collector-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d00dceb-f9c4-4c49-a631-ea69008c387a-metrics-tls\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941579 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31732c2e-e945-4fb4-b471-175489c076c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941616 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 
14:36:07.941638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b506ef-4fcb-4bdc-bf47-f875c04441c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941656 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclxx\" (UniqueName: \"kubernetes.io/projected/cc58cc97-069b-4691-88ed-cc2788096a6e-kube-api-access-jclxx\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsgdg\" (UniqueName: \"kubernetes.io/projected/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-kube-api-access-lsgdg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941720 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2cef1c-ff45-4005-8550-4d87d4601dbd-serving-cert\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-config\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a76e81a-7f92-4baf-9604-1e1c011da3a0-tmpfs\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941774 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31732c2e-e945-4fb4-b471-175489c076c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:07 crc 
kubenswrapper[4869]: I0202 14:36:07.941794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdqc\" (UniqueName: \"kubernetes.io/projected/0e414f83-c91b-4997-8cb3-3e200f62e45a-kube-api-access-9tdqc\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-config\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-srv-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72v6\" (UniqueName: \"kubernetes.io/projected/3d2cef1c-ff45-4005-8550-4d87d4601dbd-kube-api-access-q72v6\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f1c29-72b6-4768-8245-c5db262d052a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:07 crc 
kubenswrapper[4869]: I0202 14:36:07.941991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-config\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942010 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4fgx\" (UniqueName: \"kubernetes.io/projected/6ea4b230-5ebc-4712-88e0-ce48acfc4785-kube-api-access-w4fgx\") pod \"migrator-59844c95c7-7kwts\" (UID: \"6ea4b230-5ebc-4712-88e0-ce48acfc4785\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942029 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31732c2e-e945-4fb4-b471-175489c076c4-config\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942068 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f89cdf2d-50e4-4089-8345-f11f7791826d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrcc\" (UniqueName: \"kubernetes.io/projected/f75d2e36-7785-4a76-8dfb-55227d418d19-kube-api-access-mwrcc\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9q8t\" (UniqueName: \"kubernetes.io/projected/7c9fade4-43f8-4b81-90de-876b5fac7b4c-kube-api-access-k9q8t\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdrt\" (UniqueName: \"kubernetes.io/projected/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-kube-api-access-hzdrt\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942156 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d00dceb-f9c4-4c49-a631-ea69008c387a-trusted-ca\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e73f227e-ad7c-4212-abd9-e844916c0a17-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942204 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78dr\" (UniqueName: \"kubernetes.io/projected/a549ee44-8319-4980-ac57-9f0c8f169784-kube-api-access-h78dr\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942253 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a1dc5f-b886-4775-a090-0fe774fb23ed-config\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942269 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-plugins-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4wcc\" (UniqueName: \"kubernetes.io/projected/bedd3f8b-6013-48a0-a84e-5c9760146d70-kube-api-access-h4wcc\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942301 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debcc43e-e06f-486a-af8c-6a9d4d553913-serving-cert\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 
02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-key\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-csi-data-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt4w\" (UniqueName: \"kubernetes.io/projected/f89cdf2d-50e4-4089-8345-f11f7791826d-kube-api-access-lxt4w\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942426 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-trusted-ca\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942462 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-default-certificate\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.981119 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: E0202 14:36:07.986132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.486086429 +0000 UTC m=+170.130723199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.988536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a72caff3-6c15-4b44-9821-ed7b30a13b58-proxy-tls\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.992759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.992842 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccjx\" (UniqueName: \"kubernetes.io/projected/f9f98e83-4853-4d43-bf81-09795442acc8-kube-api-access-2ccjx\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.993083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjpr\" (UniqueName: \"kubernetes.io/projected/2f135077-03c5-46c5-a9c0-603837453e1c-kube-api-access-ktjpr\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.993225 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-metrics-certs\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: E0202 14:36:07.996670 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.496643369 +0000 UTC m=+170.141280139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.996935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt9sd\" (UniqueName: \"kubernetes.io/projected/f62540d0-1acd-4266-9738-f0fdc72f47d0-kube-api-access-rt9sd\") pod \"downloads-7954f5f757-zqdwm\" (UID: \"f62540d0-1acd-4266-9738-f0fdc72f47d0\") " pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.996991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfkh\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-kube-api-access-6jfkh\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e414f83-c91b-4997-8cb3-3e200f62e45a-cert\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b506ef-4fcb-4bdc-bf47-f875c04441c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzxg\" (UniqueName: 
\"kubernetes.io/projected/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-kube-api-access-ttzxg\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-node-bootstrap-token\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqj8z\" (UniqueName: \"kubernetes.io/projected/8a76e81a-7f92-4baf-9604-1e1c011da3a0-kube-api-access-rqj8z\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzqw\" (UniqueName: \"kubernetes.io/projected/ca2f1c29-72b6-4768-8245-c5db262d052a-kube-api-access-4rzqw\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997939 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e73f227e-ad7c-4212-abd9-e844916c0a17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.998001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f135077-03c5-46c5-a9c0-603837453e1c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.998327 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspcl\" (UniqueName: \"kubernetes.io/projected/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-kube-api-access-wspcl\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999069 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-stats-auth\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr246\" (UniqueName: \"kubernetes.io/projected/debcc43e-e06f-486a-af8c-6a9d4d553913-kube-api-access-mr246\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999190 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-images\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999218 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hmc\" (UniqueName: 
\"kubernetes.io/projected/a72caff3-6c15-4b44-9821-ed7b30a13b58-kube-api-access-82hmc\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-images\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25w8v\" (UniqueName: \"kubernetes.io/projected/0ade6e3e-6274-4469-af6f-39455fd721db-kube-api-access-25w8v\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a1dc5f-b886-4775-a090-0fe774fb23ed-serving-cert\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-socket-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f75d2e36-7785-4a76-8dfb-55227d418d19-proxy-tls\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999412 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2f135077-03c5-46c5-a9c0-603837453e1c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-srv-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a549ee44-8319-4980-ac57-9f0c8f169784-service-ca-bundle\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999569 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ade6e3e-6274-4469-af6f-39455fd721db-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-config\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-registration-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9f98e83-4853-4d43-bf81-09795442acc8-config-volume\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-service-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-client\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:07.999843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-serving-cert\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.001533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.001962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.004139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.006493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.009361 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.020690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsgdg\" (UniqueName: \"kubernetes.io/projected/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-kube-api-access-lsgdg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.026628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.027251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.028117 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"] Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.041011 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9922f280_ff61_424a_a336_769c0cfb5da2.slice/crio-30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0 WatchSource:0}: Error finding container 30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0: Status 404 returned error can't find the container with id 30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0 Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.049647 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"] Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.053542 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"] Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.098084 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf3c5a5_da3e_43dc_b8dc_a02b3fd32804.slice/crio-05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db WatchSource:0}: Error finding container 05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db: Status 404 returned error can't find the container with id 05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.098316 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31732c2e-e945-4fb4-b471-175489c076c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdqc\" (UniqueName: \"kubernetes.io/projected/0e414f83-c91b-4997-8cb3-3e200f62e45a-kube-api-access-9tdqc\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.100933 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.600887343 +0000 UTC m=+170.245524113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72v6\" (UniqueName: \"kubernetes.io/projected/3d2cef1c-ff45-4005-8550-4d87d4601dbd-kube-api-access-q72v6\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101122 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-config\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101151 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-srv-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-config\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4fgx\" (UniqueName: 
\"kubernetes.io/projected/6ea4b230-5ebc-4712-88e0-ce48acfc4785-kube-api-access-w4fgx\") pod \"migrator-59844c95c7-7kwts\" (UID: \"6ea4b230-5ebc-4712-88e0-ce48acfc4785\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31732c2e-e945-4fb4-b471-175489c076c4-config\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f1c29-72b6-4768-8245-c5db262d052a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f89cdf2d-50e4-4089-8345-f11f7791826d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrcc\" (UniqueName: \"kubernetes.io/projected/f75d2e36-7785-4a76-8dfb-55227d418d19-kube-api-access-mwrcc\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101378 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9q8t\" (UniqueName: \"kubernetes.io/projected/7c9fade4-43f8-4b81-90de-876b5fac7b4c-kube-api-access-k9q8t\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101397 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdrt\" (UniqueName: \"kubernetes.io/projected/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-kube-api-access-hzdrt\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e73f227e-ad7c-4212-abd9-e844916c0a17-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/1d00dceb-f9c4-4c49-a631-ea69008c387a-trusted-ca\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78dr\" (UniqueName: \"kubernetes.io/projected/a549ee44-8319-4980-ac57-9f0c8f169784-kube-api-access-h78dr\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101531 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-plugins-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4wcc\" (UniqueName: \"kubernetes.io/projected/bedd3f8b-6013-48a0-a84e-5c9760146d70-kube-api-access-h4wcc\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a1dc5f-b886-4775-a090-0fe774fb23ed-config\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-key\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debcc43e-e06f-486a-af8c-6a9d4d553913-serving-cert\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-csi-data-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101648 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxt4w\" (UniqueName: \"kubernetes.io/projected/f89cdf2d-50e4-4089-8345-f11f7791826d-kube-api-access-lxt4w\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc 
kubenswrapper[4869]: I0202 14:36:08.101671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101696 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-trusted-ca\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a72caff3-6c15-4b44-9821-ed7b30a13b58-proxy-tls\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-default-certificate\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccjx\" (UniqueName: \"kubernetes.io/projected/f9f98e83-4853-4d43-bf81-09795442acc8-kube-api-access-2ccjx\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101854 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt9sd\" (UniqueName: \"kubernetes.io/projected/f62540d0-1acd-4266-9738-f0fdc72f47d0-kube-api-access-rt9sd\") pod \"downloads-7954f5f757-zqdwm\" (UID: \"f62540d0-1acd-4266-9738-f0fdc72f47d0\") " pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101877 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktjpr\" (UniqueName: 
\"kubernetes.io/projected/2f135077-03c5-46c5-a9c0-603837453e1c-kube-api-access-ktjpr\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101896 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-metrics-certs\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101967 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfkh\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-kube-api-access-6jfkh\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e414f83-c91b-4997-8cb3-3e200f62e45a-cert\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzxg\" (UniqueName: \"kubernetes.io/projected/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-kube-api-access-ttzxg\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b506ef-4fcb-4bdc-bf47-f875c04441c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-node-bootstrap-token\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqj8z\" (UniqueName: \"kubernetes.io/projected/8a76e81a-7f92-4baf-9604-1e1c011da3a0-kube-api-access-rqj8z\") pod 
\"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102119 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e73f227e-ad7c-4212-abd9-e844916c0a17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102144 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzqw\" (UniqueName: \"kubernetes.io/projected/ca2f1c29-72b6-4768-8245-c5db262d052a-kube-api-access-4rzqw\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f135077-03c5-46c5-a9c0-603837453e1c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wspcl\" (UniqueName: \"kubernetes.io/projected/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-kube-api-access-wspcl\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-stats-auth\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr246\" (UniqueName: \"kubernetes.io/projected/debcc43e-e06f-486a-af8c-6a9d4d553913-kube-api-access-mr246\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-images\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: 
\"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102357 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hmc\" (UniqueName: \"kubernetes.io/projected/a72caff3-6c15-4b44-9821-ed7b30a13b58-kube-api-access-82hmc\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-images\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25w8v\" (UniqueName: \"kubernetes.io/projected/0ade6e3e-6274-4469-af6f-39455fd721db-kube-api-access-25w8v\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102481 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a1dc5f-b886-4775-a090-0fe774fb23ed-serving-cert\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31732c2e-e945-4fb4-b471-175489c076c4-config\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f75d2e36-7785-4a76-8dfb-55227d418d19-proxy-tls\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102536 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-socket-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f135077-03c5-46c5-a9c0-603837453e1c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-srv-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102631 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a549ee44-8319-4980-ac57-9f0c8f169784-service-ca-bundle\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ade6e3e-6274-4469-af6f-39455fd721db-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102701 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-config\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102728 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-registration-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9f98e83-4853-4d43-bf81-09795442acc8-config-volume\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-service-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-client\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-serving-cert\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a72caff3-6c15-4b44-9821-ed7b30a13b58-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102872 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnzwd\" (UniqueName: \"kubernetes.io/projected/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-kube-api-access-mnzwd\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5bgr\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-kube-api-access-z5bgr\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b506ef-4fcb-4bdc-bf47-f875c04441c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102967 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9f98e83-4853-4d43-bf81-09795442acc8-metrics-tls\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-mountpoint-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-cabundle\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gjkhc\" (UniqueName: \"kubernetes.io/projected/e1a1dc5f-b886-4775-a090-0fe774fb23ed-kube-api-access-gjkhc\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-certs\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103153 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/debcc43e-e06f-486a-af8c-6a9d4d553913-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d00dceb-f9c4-4c49-a631-ea69008c387a-metrics-tls\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31732c2e-e945-4fb4-b471-175489c076c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103203 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-profile-collector-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103226 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod 
\"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b506ef-4fcb-4bdc-bf47-f875c04441c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclxx\" (UniqueName: \"kubernetes.io/projected/cc58cc97-069b-4691-88ed-cc2788096a6e-kube-api-access-jclxx\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103321 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2cef1c-ff45-4005-8550-4d87d4601dbd-serving-cert\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-config\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a76e81a-7f92-4baf-9604-1e1c011da3a0-tmpfs\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.104157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-csi-data-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.105283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-socket-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.105647 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31732c2e-e945-4fb4-b471-175489c076c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.105893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a76e81a-7f92-4baf-9604-1e1c011da3a0-tmpfs\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.106435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-srv-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-config\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.107752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.108263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-images\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a1dc5f-b886-4775-a090-0fe774fb23ed-serving-cert\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.110871 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113682 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a1dc5f-b886-4775-a090-0fe774fb23ed-config\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113702 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f75d2e36-7785-4a76-8dfb-55227d418d19-proxy-tls\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.111588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-plugins-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.112081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-trusted-ca\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.112291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-default-certificate\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.112673 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.612652373 +0000 UTC m=+170.257289143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.112758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-config\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f135077-03c5-46c5-a9c0-603837453e1c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.111405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f1c29-72b6-4768-8245-c5db262d052a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.111467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-images\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f89cdf2d-50e4-4089-8345-f11f7791826d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113248 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e73f227e-ad7c-4212-abd9-e844916c0a17-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 
14:36:08.114568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.114749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e414f83-c91b-4997-8cb3-3e200f62e45a-cert\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a72caff3-6c15-4b44-9821-ed7b30a13b58-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a72caff3-6c15-4b44-9821-ed7b30a13b58-proxy-tls\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115402 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9f98e83-4853-4d43-bf81-09795442acc8-config-volume\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.116043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-config\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.116425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.109287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-metrics-certs\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " 
pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.116764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-service-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a549ee44-8319-4980-ac57-9f0c8f169784-service-ca-bundle\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-registration-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-mountpoint-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b506ef-4fcb-4bdc-bf47-f875c04441c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-srv-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d00dceb-f9c4-4c49-a631-ea69008c387a-trusted-ca\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118822 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/debcc43e-e06f-486a-af8c-6a9d4d553913-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-config\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.119318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e73f227e-ad7c-4212-abd9-e844916c0a17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.119993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.120257 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-cabundle\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.120952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.121585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.122646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ade6e3e-6274-4469-af6f-39455fd721db-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9f98e83-4853-4d43-bf81-09795442acc8-metrics-tls\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " 
pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.124219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f135077-03c5-46c5-a9c0-603837453e1c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-node-bootstrap-token\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.124860 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-stats-auth\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.125477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-certs\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.125548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-client\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.126755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-key\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.128709 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-profile-collector-cert\") 
pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.128736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b506ef-4fcb-4bdc-bf47-f875c04441c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.129951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debcc43e-e06f-486a-af8c-6a9d4d553913-serving-cert\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-serving-cert\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d00dceb-f9c4-4c49-a631-ea69008c387a-metrics-tls\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130617 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2cef1c-ff45-4005-8550-4d87d4601dbd-serving-cert\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130728 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.131161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-profile-collector-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.144726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdqc\" (UniqueName: \"kubernetes.io/projected/0e414f83-c91b-4997-8cb3-3e200f62e45a-kube-api-access-9tdqc\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.171059 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q72v6\" (UniqueName: \"kubernetes.io/projected/3d2cef1c-ff45-4005-8550-4d87d4601dbd-kube-api-access-q72v6\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.188255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9q8t\" (UniqueName: \"kubernetes.io/projected/7c9fade4-43f8-4b81-90de-876b5fac7b4c-kube-api-access-k9q8t\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.204077 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.204266 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.704230595 +0000 UTC m=+170.348867365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.204487 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4fgx\" (UniqueName: \"kubernetes.io/projected/6ea4b230-5ebc-4712-88e0-ce48acfc4785-kube-api-access-w4fgx\") pod \"migrator-59844c95c7-7kwts\" (UID: \"6ea4b230-5ebc-4712-88e0-ce48acfc4785\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.204803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.206532 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.70650466 +0000 UTC m=+170.351141420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.222345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.223965 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.246187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzqw\" (UniqueName: \"kubernetes.io/projected/ca2f1c29-72b6-4768-8245-c5db262d052a-kube-api-access-4rzqw\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.255226 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.265729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jfkh\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-kube-api-access-6jfkh\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.285346 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrcc\" (UniqueName: \"kubernetes.io/projected/f75d2e36-7785-4a76-8dfb-55227d418d19-kube-api-access-mwrcc\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.305267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdrt\" (UniqueName: \"kubernetes.io/projected/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-kube-api-access-hzdrt\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.306503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.307135 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.806893439 +0000 UTC m=+170.451530209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.308508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.308700 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.310419 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.810400816 +0000 UTC m=+170.455037586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.319572 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.325421 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b506ef-4fcb-4bdc-bf47-f875c04441c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.325684 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.346268 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9fade4_43f8_4b81_90de_876b5fac7b4c.slice/crio-9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2 WatchSource:0}: Error finding container 9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2: Status 404 returned error can't find the container with id 9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2 Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.370814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.370884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzxg\" (UniqueName: \"kubernetes.io/projected/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-kube-api-access-ttzxg\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.397618 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hmc\" (UniqueName: \"kubernetes.io/projected/a72caff3-6c15-4b44-9821-ed7b30a13b58-kube-api-access-82hmc\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.399812 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"] Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.409343 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.409586 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.909529233 +0000 UTC m=+170.554166003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.410200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.411027 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.911007889 +0000 UTC m=+170.555644669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.412448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxt4w\" (UniqueName: \"kubernetes.io/projected/f89cdf2d-50e4-4089-8345-f11f7791826d-kube-api-access-lxt4w\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.417428 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.424433 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqj8z\" (UniqueName: \"kubernetes.io/projected/8a76e81a-7f92-4baf-9604-1e1c011da3a0-kube-api-access-rqj8z\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.426266 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.435589 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aacb2d9_48ca_4f95_9153_8f4338b4a16c.slice/crio-5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d WatchSource:0}: Error finding container 5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d: Status 404 returned error can't find the container with id 5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.446754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78dr\" (UniqueName: \"kubernetes.io/projected/a549ee44-8319-4980-ac57-9f0c8f169784-kube-api-access-h78dr\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.450147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.461021 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.471629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.473391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25w8v\" (UniqueName: \"kubernetes.io/projected/0ade6e3e-6274-4469-af6f-39455fd721db-kube-api-access-25w8v\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.483273 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.484549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4wcc\" (UniqueName: \"kubernetes.io/projected/bedd3f8b-6013-48a0-a84e-5c9760146d70-kube-api-access-h4wcc\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.494188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4jh5"] Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.503602 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.507624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccjx\" (UniqueName: \"kubernetes.io/projected/f9f98e83-4853-4d43-bf81-09795442acc8-kube-api-access-2ccjx\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.512435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.512691 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.012638089 +0000 UTC m=+170.657274859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.512942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.513434 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.013406868 +0000 UTC m=+170.658043638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.516489 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.526142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt9sd\" (UniqueName: \"kubernetes.io/projected/f62540d0-1acd-4266-9738-f0fdc72f47d0-kube-api-access-rt9sd\") pod \"downloads-7954f5f757-zqdwm\" (UID: \"f62540d0-1acd-4266-9738-f0fdc72f47d0\") " pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.549415 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.558536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktjpr\" (UniqueName: \"kubernetes.io/projected/2f135077-03c5-46c5-a9c0-603837453e1c-kube-api-access-ktjpr\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.575269 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.590464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.596264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31732c2e-e945-4fb4-b471-175489c076c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.614722 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.614874 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.114815732 +0000 UTC m=+170.759452492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.615258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.615871 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.115853018 +0000 UTC m=+170.760489788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.618731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.644843 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.645981 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.653620 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.659880 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.664251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnzwd\" (UniqueName: \"kubernetes.io/projected/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-kube-api-access-mnzwd\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.669925 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.670003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wspcl\" (UniqueName: \"kubernetes.io/projected/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-kube-api-access-wspcl\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.703002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bgr\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-kube-api-access-z5bgr\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.705658 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.710334 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.716491 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.716657 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.216629846 +0000 UTC m=+170.861266626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.716900 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.717364 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 14:36:09.217356373 +0000 UTC m=+170.861993133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.723748 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclxx\" (UniqueName: \"kubernetes.io/projected/cc58cc97-069b-4691-88ed-cc2788096a6e-kube-api-access-jclxx\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.730119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.730198 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.740385 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.746955 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.777752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkhc\" (UniqueName: \"kubernetes.io/projected/e1a1dc5f-b886-4775-a090-0fe774fb23ed-kube-api-access-gjkhc\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.783358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr246\" (UniqueName: \"kubernetes.io/projected/debcc43e-e06f-486a-af8c-6a9d4d553913-kube-api-access-mr246\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.792938 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.818002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.819178 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.319147587 +0000 UTC m=+170.963784357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.840424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4jh5" event={"ID":"0e414f83-c91b-4997-8cb3-3e200f62e45a","Type":"ContainerStarted","Data":"f68b18e01951bee20d5ad62beb1695c5dc733a1de35699be75bcfedbca173c7e"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.848251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" event={"ID":"6aacb2d9-48ca-4f95-9153-8f4338b4a16c","Type":"ContainerStarted","Data":"5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.854345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerStarted","Data":"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.877589 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" event={"ID":"0bef80e9-27d1-43c4-9a1f-4a86b2effe23","Type":"ContainerStarted","Data":"bf26bdf8aee31f6fbbb4edaf16894afa8066e5e4ca4a25971d51c5e065ee63ff"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.877649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" event={"ID":"0bef80e9-27d1-43c4-9a1f-4a86b2effe23","Type":"ContainerStarted","Data":"24b9b1880ef9ed33fa8d9bb45282da2fb75bb55ecd62003d404271e536976623"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.884684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" event={"ID":"9922f280-ff61-424a-a336-769c0cfb5da2","Type":"ContainerStarted","Data":"2b51bcbb85d8472751355858ef6cc92f5966ef873355b4087900f8a831c03133"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.885474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" event={"ID":"9922f280-ff61-424a-a336-769c0cfb5da2","Type":"ContainerStarted","Data":"30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.890193 4869 generic.go:334] "Generic (PLEG): container finished" podID="aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804" containerID="3486b6e56d27275d69a67f88155309502e48009b2bc86d502be592fe3bea07bb" exitCode=0 Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.890294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" event={"ID":"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804","Type":"ContainerDied","Data":"3486b6e56d27275d69a67f88155309502e48009b2bc86d502be592fe3bea07bb"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.890333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" event={"ID":"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804","Type":"ContainerStarted","Data":"05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.893832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" event={"ID":"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f","Type":"ContainerStarted","Data":"3b1b61802d93cd5c7c479af3b71a8d217bc71bbb0e14188d2aafd4662337373c"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.893892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" event={"ID":"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f","Type":"ContainerStarted","Data":"d7f0c9fd23834a043f720eec366729ea6c97a4e56370e8110b05b1c34cecd5a8"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.903367 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerStarted","Data":"4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.903445 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.906847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-245rt" event={"ID":"7c9fade4-43f8-4b81-90de-876b5fac7b4c","Type":"ContainerStarted","Data":"9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.913376 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-snmjm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.913526 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.916273 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" event={"ID":"0fb104b8-53b8-45dd-8406-206d6ba5a250","Type":"ContainerStarted","Data":"021299cc13546b3f383ba488e2cafe7486ef37ed6c0eca198fd06c72bf8210ed"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.918256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-snfqj" event={"ID":"a549ee44-8319-4980-ac57-9f0c8f169784","Type":"ContainerStarted","Data":"ad5f248a948412d08a9279057eb39d56e5b75334a121a2e61b307630af16d2b8"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.920068 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.920620 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.420603301 +0000 UTC m=+171.065240071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.921162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerStarted","Data":"35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.921224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerStarted","Data":"e0e031e07f3777bf084c57bd2ad11cca8d11083d95a8cbf49d91d2ce2ed3c4ce"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.921568 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.925714 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.925768 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.926817 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" event={"ID":"dae3c559-c92e-45a1-8e66-383dee4460cd","Type":"ContainerStarted","Data":"475122a0b994fa79d5f3dd602b29797fe199c5e8506b565b2ad726b9bcc7d313"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.926871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" event={"ID":"dae3c559-c92e-45a1-8e66-383dee4460cd","Type":"ContainerStarted","Data":"259282048e007b5f2976df9faef40982b18fa21eaa64efe8568ad33302a63d2d"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.930121 4869 generic.go:334] "Generic (PLEG): container finished" podID="78130644-70b6-4285-9ca7-e5a671bd1111" containerID="4099c71b08581568ecd4efafcfae076d9ebd7bdba6d5418d35fcbab38fc6794f" exitCode=0 Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.931425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerDied","Data":"4099c71b08581568ecd4efafcfae076d9ebd7bdba6d5418d35fcbab38fc6794f"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.931475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerStarted","Data":"e200de564724006535d5b993c357e3923e2157ea97fa6f6141e1672dfbaf45b4"} Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.931576 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.979193 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.980171 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.987937 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.006832 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.022733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.026769 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.526722231 +0000 UTC m=+171.171359001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.067831 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" podStartSLOduration=147.067805715 podStartE2EDuration="2m27.067805715s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.063201222 +0000 UTC m=+170.707837992" watchObservedRunningTime="2026-02-02 14:36:09.067805715 +0000 UTC m=+170.712442485" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.070372 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxvvv"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.126897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.127606 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.627577901 +0000 UTC m=+171.272214671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.181035 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" podStartSLOduration=147.18100967 podStartE2EDuration="2m27.18100967s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.180480708 +0000 UTC m=+170.825117478" watchObservedRunningTime="2026-02-02 14:36:09.18100967 +0000 UTC m=+170.825646440" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.234507 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.234732 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.734701557 +0000 UTC m=+171.379338327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.235710 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.236361 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.736322906 +0000 UTC m=+171.380959676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.266429 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ptmkd" podStartSLOduration=147.266404969 podStartE2EDuration="2m27.266404969s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.262737139 +0000 UTC m=+170.907373939" watchObservedRunningTime="2026-02-02 14:36:09.266404969 +0000 UTC m=+170.911041739" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.337176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.337640 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.837587296 +0000 UTC m=+171.482224066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.440059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.440511 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.940494397 +0000 UTC m=+171.585131177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.542247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.542785 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.042763642 +0000 UTC m=+171.687400412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.562126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.625997 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" podStartSLOduration=147.625964186 podStartE2EDuration="2m27.625964186s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.619095206 +0000 UTC m=+171.263731976" watchObservedRunningTime="2026-02-02 14:36:09.625964186 +0000 UTC m=+171.270600956" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.647283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.648269 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.148251186 +0000 UTC m=+171.792887966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.710878 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.714199 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.727011 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.748373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.748829 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.248797079 +0000 UTC m=+171.893433849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:09 crc kubenswrapper[4869]: W0202 14:36:09.757478 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5daf4eab_ca30_4ea4_9eb0_6cc5f06877df.slice/crio-78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7 WatchSource:0}: Error finding container 78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7: Status 404 returned error can't find the container with id 78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.786165 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qx2qt" podStartSLOduration=147.786136851 podStartE2EDuration="2m27.786136851s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.784792688 +0000 UTC m=+171.429429478" watchObservedRunningTime="2026-02-02 14:36:09.786136851 +0000 UTC m=+171.430773621"
Feb 02 14:36:09 crc kubenswrapper[4869]: W0202 14:36:09.815154 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75d2e36_7785_4a76_8dfb_55227d418d19.slice/crio-5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c WatchSource:0}: Error finding container 5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c: Status 404 returned error can't find the container with id 5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.851492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.851960 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.351942786 +0000 UTC m=+171.996579556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.898252 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" podStartSLOduration=147.898216888 podStartE2EDuration="2m27.898216888s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.870428502 +0000 UTC m=+171.515065262" watchObservedRunningTime="2026-02-02 14:36:09.898216888 +0000 UTC m=+171.542853658"
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.952786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.954057 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.454033836 +0000 UTC m=+172.098670596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.969022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-snfqj" event={"ID":"a549ee44-8319-4980-ac57-9f0c8f169784","Type":"ContainerStarted","Data":"35d87baf44583a98f4382cfd19d7f9ed312b1d2fff154a551bb87f7bdb8e09be"}
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.995277 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4jh5" event={"ID":"0e414f83-c91b-4997-8cb3-3e200f62e45a","Type":"ContainerStarted","Data":"c8f98d231007cd54932c402080e605ab6217a19ab274c06493e5ae9aee3283e8"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.018953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" event={"ID":"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f","Type":"ContainerStarted","Data":"f14c3d76b43ca6897f530064913e262d5e368ee2078f33c8b96634f28866bf0e"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.063973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.069132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.569110998 +0000 UTC m=+172.213747768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
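Every mount and unmount attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 above fails for the same root cause: the kubelet's plugin registry does not yet contain kubevirt.io.hostpath-provisioner, because the csi-hostpathplugin pod that registers the driver is itself still starting (its first container only appears at 14:36:11.531207 below). Once the driver's registrar connects over the kubelet plugin socket, the registration is also reflected in the node's CSINode object, which is the easiest place to check from outside. A minimal client-go sketch, assuming a reachable kubeconfig in the default location and the node name crc from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig; the path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The CSINode object lists the drivers the kubelet on that node has
	// accepted via plugin registration; "crc" is the node name in this log.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered:", d.Name)
	}
	// Until kubevirt.io.hostpath-provisioner appears in this list, every
	// MountDevice/TearDownAt for its volumes fails exactly as in the log.
}
```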
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.079933 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-245rt" event={"ID":"7c9fade4-43f8-4b81-90de-876b5fac7b4c","Type":"ContainerStarted","Data":"83c30a5bae358f2d5eeaef9e90bacc6e4d4e85b599e85292ed9599dae6e574f4"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.086063 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.090486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.092801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" event={"ID":"0fb104b8-53b8-45dd-8406-206d6ba5a250","Type":"ContainerStarted","Data":"b634570d4b57aa186c6ad6fde832ed5506971ec301154cef1fa3228b98685ea1"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.095607 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" event={"ID":"6aacb2d9-48ca-4f95-9153-8f4338b4a16c","Type":"ContainerStarted","Data":"f60b75498b78b8e1d9cc016298eb48c46aa35a328a6f9623b5ed8f151ff061f4"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.100827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" event={"ID":"6ea4b230-5ebc-4712-88e0-ce48acfc4785","Type":"ContainerStarted","Data":"12224d7b4868f3fbdaa05a1f8ea9b38f4b88f351d1b341503889cdc6f1e2b977"}
Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.108813 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b506ef_4fcb_4bdc_bf47_f875c04441c0.slice/crio-b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab WatchSource:0}: Error finding container b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab: Status 404 returned error can't find the container with id b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.131661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" event={"ID":"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804","Type":"ContainerStarted","Data":"de17559b548103ed4151d27f45d7c40673f6fe65a49238ba71e888fd2ea0d5f7"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.143570 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.161605 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.166040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.166467 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.66643855 +0000 UTC m=+172.311075320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.167809 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" event={"ID":"8a76e81a-7f92-4baf-9604-1e1c011da3a0","Type":"ContainerStarted","Data":"99fe4a5be62a6ad1018e541152f4fb564364b1354bc8972cc30de89c4885e368"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.172223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.173505 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.673478614 +0000 UTC m=+172.318115544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
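The E nestedpendingoperations.go:348 lines show how the volume manager paces these retries: after a failure the operation is parked and may not run again before now plus durationBeforeRetry, with the delay growing from the 500ms initial value seen here up to a cap on repeated failures. A minimal sketch of that bookkeeping; the initial value matches the log, but the doubling factor and cap are assumptions for illustration, not the kubelet's exact tuning:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// expBackoff mimics the "No retries permitted until X (durationBeforeRetry D)"
// bookkeeping in the log: after each failure the operation may not be retried
// before now+delay, and the delay grows up to a cap.
type expBackoff struct {
	delay, max time.Duration
	notBefore  time.Time
}

func (b *expBackoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // initial delay, as in the log
	} else if b.delay < b.max {
		b.delay *= 2 // assumed growth factor
	}
	b.notBefore = now.Add(b.delay)
}

func (b *expBackoff) ready(now time.Time) bool { return !now.Before(b.notBefore) }

func main() {
	b := &expBackoff{max: 2 * time.Minute}
	// Stand-in for the failing CSI call.
	mountDevice := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		now := time.Now()
		if !b.ready(now) {
			time.Sleep(time.Until(b.notBefore))
			now = time.Now()
		}
		if err := mountDevice(); err != nil {
			b.fail(now)
			fmt.Printf("failed, no retries permitted until %s (durationBeforeRetry %s)\n",
				b.notBefore.Format(time.RFC3339Nano), b.delay)
		}
	}
}
```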
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.182366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" event={"ID":"3d2cef1c-ff45-4005-8550-4d87d4601dbd","Type":"ContainerStarted","Data":"3df94f7612a3b09263565ce4d388a5ed4804818685a2c16751a5f4a9aeb282a1"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.182420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" event={"ID":"3d2cef1c-ff45-4005-8550-4d87d4601dbd","Type":"ContainerStarted","Data":"7ac04d0c3040d8996b230ec821decd6f44849e71608bd7abdb92a90e32cd2c53"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.183484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.187156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" event={"ID":"f75d2e36-7785-4a76-8dfb-55227d418d19","Type":"ContainerStarted","Data":"5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.191620 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdq4v"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.193993 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dxvvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.194056 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" podUID="3d2cef1c-ff45-4005-8550-4d87d4601dbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.202935 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"]
Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.215134 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f135077_03c5_46c5_a9c0_603837453e1c.slice/crio-83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967 WatchSource:0}: Error finding container 83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967: Status 404 returned error can't find the container with id 83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.231145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.240808 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zqdwm"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.263784 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podStartSLOduration=148.263734513 podStartE2EDuration="2m28.263734513s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.161257982 +0000 UTC m=+171.805894752" watchObservedRunningTime="2026-02-02 14:36:10.263734513 +0000 UTC m=+171.908371293"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.273269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.273872 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.773840692 +0000 UTC m=+172.418477452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.275986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.276629 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.77660225 +0000 UTC m=+172.421239230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.284160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerStarted","Data":"b9283d0cd7c9c1a92ff238d01fa62272096457af7dad6776f1218dfdbaa71354"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.313559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" event={"ID":"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df","Type":"ContainerStarted","Data":"78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7"}
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mcwnk"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314482 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314557 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314889 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.315350 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cvd9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.315398 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" podUID="5daf4eab-ca30-4ea4-9eb0-6cc5f06877df" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.378234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.379820 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.879747096 +0000 UTC m=+172.524383876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.451044 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m44c2"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.484793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.486645 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.986620496 +0000 UTC m=+172.631257266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.505340 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.535699 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.535745 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.541524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.546496 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"]
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.550058 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"]
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.558575 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.564162 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whptb"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.564188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.565628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.577522 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.578754 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8vv5"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.585658 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.585992 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.085975539 +0000 UTC m=+172.730612309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.588357 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"] Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.591171 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d2d2e9_b85f_46b8_b768_a59ebd9fd423.slice/crio-ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9 WatchSource:0}: Error finding container ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9: Status 404 returned error can't find the container with id ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9 Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.593081 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.599500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.611870 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" podStartSLOduration=148.611854467 podStartE2EDuration="2m28.611854467s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.609469909 +0000 UTC m=+172.254106679" watchObservedRunningTime="2026-02-02 14:36:10.611854467 +0000 UTC m=+172.256491237" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.629348 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" podStartSLOduration=148.629324599 podStartE2EDuration="2m28.629324599s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.626580711 +0000 UTC m=+172.271217491" watchObservedRunningTime="2026-02-02 14:36:10.629324599 +0000 UTC m=+172.273961369" Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.677404 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddebcc43e_e06f_486a_af8c_6a9d4d553913.slice/crio-64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21 WatchSource:0}: Error finding container 64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21: Status 404 returned error can't find the container with id 64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21 Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.678015 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" 
podStartSLOduration=147.6779875 podStartE2EDuration="2m27.6779875s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.677799265 +0000 UTC m=+172.322436035" watchObservedRunningTime="2026-02-02 14:36:10.6779875 +0000 UTC m=+172.322624280" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.687766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.693502 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.193461473 +0000 UTC m=+172.838098243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.702278 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d00dceb_f9c4_4c49_a631_ea69008c387a.slice/crio-104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306 WatchSource:0}: Error finding container 104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306: Status 404 returned error can't find the container with id 104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306 Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.742349 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" podStartSLOduration=148.742330908 podStartE2EDuration="2m28.742330908s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.710260797 +0000 UTC m=+172.354897567" watchObservedRunningTime="2026-02-02 14:36:10.742330908 +0000 UTC m=+172.386967668" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.748411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.760144 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40fc5ef_7c09_46e1_808d_f388cba3a5e3.slice/crio-f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618 WatchSource:0}: Error finding container f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618: Status 404 returned error can't find the container with id f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618 Feb 02 14:36:10 crc 
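The pod_startup_latency_tracker lines report podStartE2EDuration as the observed running time minus the pod's creation timestamp; podStartSLOduration additionally excludes image pull time, and with the zero-valued firstStartedPulling/lastFinishedPulling above the two values coincide. Reproducing the arithmetic from the apiserver-7bbb656c7d-t8c67 entry with nothing but the standard library (the result agrees with the logged 147.6779875 to within a fraction of a millisecond; the small gap is the tracker sampling its own clock rather than the recorded observedRunningTime):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layouts matched to the timestamps as they appear in the log.
	created, err := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2026-02-02 14:33:43 +0000 UTC") // podCreationTimestamp
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-02-02 14:36:10.677799265 +0000 UTC") // observedRunningTime
	if err != nil {
		log.Fatal(err)
	}
	e2e := observed.Sub(created)
	// Prints 2m27.677799265s / 147.6777993, matching the tracker's
	// podStartE2EDuration and podStartSLOduration up to clock sampling.
	fmt.Printf("podStartE2EDuration=%q podStartSLOduration=%.7f\n",
		e2e.String(), e2e.Seconds())
}
```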
Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.760392 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ade6e3e_6274_4469_af6f_39455fd721db.slice/crio-035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c WatchSource:0}: Error finding container 035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c: Status 404 returned error can't find the container with id 035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.790188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.790642 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-245rt" podStartSLOduration=5.790621611 podStartE2EDuration="5.790621611s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.742709188 +0000 UTC m=+172.387345958" watchObservedRunningTime="2026-02-02 14:36:10.790621611 +0000 UTC m=+172.435258381"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.791084 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.291057831 +0000 UTC m=+172.935694601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.791326 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z4jh5" podStartSLOduration=5.791322718 podStartE2EDuration="5.791322718s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.78773226 +0000 UTC m=+172.432369030" watchObservedRunningTime="2026-02-02 14:36:10.791322718 +0000 UTC m=+172.435959488"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.889082 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-snfqj" podStartSLOduration=148.889060841 podStartE2EDuration="2m28.889060841s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.82985856 +0000 UTC m=+172.474495340" watchObservedRunningTime="2026-02-02 14:36:10.889060841 +0000 UTC m=+172.533697611"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.889444 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" podStartSLOduration=147.889434391 podStartE2EDuration="2m27.889434391s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.886896918 +0000 UTC m=+172.531533688" watchObservedRunningTime="2026-02-02 14:36:10.889434391 +0000 UTC m=+172.534071161"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.893563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.893926 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.393898041 +0000 UTC m=+173.038534811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.922273 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" podStartSLOduration=148.922239521 podStartE2EDuration="2m28.922239521s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.921392999 +0000 UTC m=+172.566029769" watchObservedRunningTime="2026-02-02 14:36:10.922239521 +0000 UTC m=+172.566876311"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.998457 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.999080 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.499055517 +0000 UTC m=+173.143692287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.080997 4869 csr.go:261] certificate signing request csr-4j5dp is approved, waiting to be issued
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.093260 4869 csr.go:257] certificate signing request csr-4j5dp is issued
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.101103 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.101525 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.601508897 +0000 UTC m=+173.246145667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
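The csr.go lines record the kubelet's certificate manager completing a rotation: csr-4j5dp first gains an Approved condition, then the signer fills in status.certificate, at which point it is reported as issued. A client-go sketch of that two-stage wait (polling for brevity where the real code uses a watch; the CSR name is the one from this log):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	certv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(
			context.TODO(), "csr-4j5dp", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		approved := false
		for _, c := range csr.Status.Conditions {
			if c.Type == certv1.CertificateApproved && c.Status == corev1.ConditionTrue {
				approved = true
			}
		}
		switch {
		case approved && len(csr.Status.Certificate) > 0:
			// Stage two: the signer populated status.certificate.
			fmt.Println("certificate signing request csr-4j5dp is issued")
			return
		case approved:
			// Stage one: approved, but not yet signed.
			fmt.Println("certificate signing request csr-4j5dp is approved, waiting to be issued")
		}
		time.Sleep(time.Second)
	}
}
```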
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.213414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.214550 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.714507837 +0000 UTC m=+173.359144627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.214679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.215204 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.715194663 +0000 UTC m=+173.359831433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.315295 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.315708 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.815663954 +0000 UTC m=+173.460300844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.403969 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" event={"ID":"ca2f1c29-72b6-4768-8245-c5db262d052a","Type":"ContainerStarted","Data":"f037ecfa342889bc8e77c537f34c53f1db93c81d926d77561e653a5b5f5edc53"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.404037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" event={"ID":"ca2f1c29-72b6-4768-8245-c5db262d052a","Type":"ContainerStarted","Data":"9c12ce938552da76d5c5f3887e84a70163c30f4a298a458bc8fa949bcb0c1eb9"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.410725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" event={"ID":"2f135077-03c5-46c5-a9c0-603837453e1c","Type":"ContainerStarted","Data":"ade710e51ea34e3e3b68afb62334d5ffcdaf25851bce2e4cec3c13311c984917"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.410785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" event={"ID":"2f135077-03c5-46c5-a9c0-603837453e1c","Type":"ContainerStarted","Data":"83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.416858 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
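The "SyncLoop (PLEG): event for pod" lines come from the pod lifecycle event generator, which periodically relists containers from CRI-O and diffs the result against its previous snapshot; that is why a pod such as package-server-manager logs one ContainerStarted for the sandbox ID and another per container ID. A toy version of that diff, with IDs truncated from the log for brevity:

```go
package main

import "fmt"

// PodLifecycleEvent mirrors the shape of the events in the log: a pod UID,
// an event type, and the container ID as data.
type PodLifecycleEvent struct {
	ID   string // pod UID
	Type string // e.g. "ContainerStarted"
	Data string // container (or sandbox) ID
}

// relistDiff emits ContainerStarted for IDs present in the new snapshot but
// not the old one, which is essentially how a relist-based PLEG synthesizes
// these events.
func relistDiff(podUID string, before, now map[string]bool) []PodLifecycleEvent {
	var events []PodLifecycleEvent
	for id := range now {
		if !before[id] {
			events = append(events, PodLifecycleEvent{ID: podUID, Type: "ContainerStarted", Data: id})
		}
	}
	return events
}

func main() {
	// Hypothetical snapshots: the sandbox existed, then a container appeared.
	before := map[string]bool{"9c12ce9385...": true}
	now := map[string]bool{"9c12ce9385...": true, "f037ecfa34...": true}
	for _, e := range relistDiff("ca2f1c29-72b6-4768-8245-c5db262d052a", before, now) {
		fmt.Printf("SyncLoop (PLEG): event for pod ID=%s Type=%s Data=%s\n", e.ID, e.Type, e.Data)
	}
}
```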
pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.417347 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.917333014 +0000 UTC m=+173.561969784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.422352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" event={"ID":"c40fc5ef-7c09-46e1-808d-f388cba3a5e3","Type":"ContainerStarted","Data":"f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.463313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" event={"ID":"a72caff3-6c15-4b44-9821-ed7b30a13b58","Type":"ContainerStarted","Data":"ae63f9c0d62409fa2fe4bd2555bab62088ba66bba2674df5a3b0c4a41613c2f8"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.463398 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" event={"ID":"a72caff3-6c15-4b44-9821-ed7b30a13b58","Type":"ContainerStarted","Data":"6c1b344c606bf0165834920bf58b76dc064bc2dcc3268e6992270dc8daca2c86"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.497153 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerStarted","Data":"64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.499237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" event={"ID":"6ea4b230-5ebc-4712-88e0-ce48acfc4785","Type":"ContainerStarted","Data":"48b00c29c217ddb68b1a5a87370d742f0fae5a672e3347d48d36f30c5aa0722d"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.504400 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" event={"ID":"90d2d2e9-b85f-46b8-b768-a59ebd9fd423","Type":"ContainerStarted","Data":"ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.515068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mcwnk" event={"ID":"f9f98e83-4853-4d43-bf81-09795442acc8","Type":"ContainerStarted","Data":"ad88062a40c9996c635fd2e473d95bdc62b642e212f6b82b7a05c63976249527"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.518269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.521105 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.021046815 +0000 UTC m=+173.665683585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.524178 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:11 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:11 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:11 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.524407 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.531207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"f55cc82aaa2d8bf4dcd503e18bf7ad8d0b3fae62bcea25e83cdb617c7fc6764b"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.532229 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" event={"ID":"1d00dceb-f9c4-4c49-a631-ea69008c387a","Type":"ContainerStarted","Data":"104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.546384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" event={"ID":"0ade6e3e-6274-4469-af6f-39455fd721db","Type":"ContainerStarted","Data":"035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.553228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerStarted","Data":"86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.553280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerStarted","Data":"abf150712433e6a69bcdbac96eb8f5a7e4f4678220a199cb5fef1de1079707b8"} Feb 02 14:36:11 crc 
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.553299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.559739 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xl8hj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.559800 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.573303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerStarted","Data":"ebbb35a369b9723fdfeb34f546ac806481285e12e0053e2c255a12c42d7b4ce5"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.582707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerStarted","Data":"e3a339061df5e2b8d2778dd0a6334b4aea0b9e977556e43022ce4cb22949d68a"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.609952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" event={"ID":"f75d2e36-7785-4a76-8dfb-55227d418d19","Type":"ContainerStarted","Data":"050e31b91b5c7dedb132b86359245fc27b27608b0eca63aea8d88b7743f2102c"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.628056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.628533 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.128515948 +0000 UTC m=+173.773152718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.628791 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerStarted","Data":"b3271718de5d10823c1d8cb58a92daa70441d4c0775319d6b1e4703935350e20"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.646486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" event={"ID":"e1a1dc5f-b886-4775-a090-0fe774fb23ed","Type":"ContainerStarted","Data":"9a84fa8773f7e9db4e69af3f2e4d4a7f1d9c4fa3d59d3f393762bacab2a6e295"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.686631 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" event={"ID":"b1cf41b3-7232-4a16-ad7f-0a686f1653dd","Type":"ContainerStarted","Data":"6e68930c6153f915b6348da6c34758a9e61c28fb9d9f8ea15c928685e6fa7eaa"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.702195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" event={"ID":"cc58cc97-069b-4691-88ed-cc2788096a6e","Type":"ContainerStarted","Data":"b6f4a2048a87e6162f6fa89fc21de966dcd24b8e327545bd8e8222d7be8856e4"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.728953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" event={"ID":"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df","Type":"ContainerStarted","Data":"695ac9f52b74597d91419d9495d815281ffe5909b9759b0c54c81a9a495ced4a"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.733891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" event={"ID":"18ef05f5-ba54-4dfe-adeb-32ed86dfce28","Type":"ContainerStarted","Data":"07691362d822347b63329bcaddc3fa54623ad2dc54914261820216d6f58bea84"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.735507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerStarted","Data":"8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.735544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerStarted","Data":"d23da6374bff6b7548ad4e5c369db95c776162120875c848b3e93ff08178cc90"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.736474 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737342 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cvd9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
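To confirm the 500ms retry cadence for this volume, or to find when the driver finally registered and the retries stopped, the "No retries permitted until" timestamps can be pulled straight from the journal, for example by piping `journalctl -u kubelet --no-pager` into a small scanner like the sketch below (the regular expression is an assumption matched to the entry format above):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g. "No retries permitted until 2026-02-02 14:36:12.128515948 +0000 UTC"
	re := regexp.MustCompile(`No retries permitted until (\S+ \S+\.\d+)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	n := 0
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			n++
			fmt.Printf("retry %d gated until %s\n", n, m[1])
		}
	}
	fmt.Printf("%d gated operations found\n", n)
}
```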
probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737404 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" podUID="5daf4eab-ca30-4ea4-9eb0-6cc5f06877df" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737794 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737881 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.738211 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.740078 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.240054032 +0000 UTC m=+173.884690812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.742238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" event={"ID":"66b506ef-4fcb-4bdc-bf47-f875c04441c0","Type":"ContainerStarted","Data":"d1fdfac94c4c8e5070c0087162537744ebf696ef57e2c9dbf6436561d3332c70"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.742294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" event={"ID":"66b506ef-4fcb-4bdc-bf47-f875c04441c0","Type":"ContainerStarted","Data":"b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.749524 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podStartSLOduration=148.749496345 podStartE2EDuration="2m28.749496345s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.748716305 +0000 UTC m=+173.393353075" watchObservedRunningTime="2026-02-02 14:36:11.749496345 +0000 UTC m=+173.394133115" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.774460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" event={"ID":"e73f227e-ad7c-4212-abd9-e844916c0a17","Type":"ContainerStarted","Data":"a4f243ea36089108322d4774b6549376f8fc0975b0592e48b51a9217c0a2c5a4"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.774524 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" event={"ID":"e73f227e-ad7c-4212-abd9-e844916c0a17","Type":"ContainerStarted","Data":"b36aaa97185006a917472ea03de586d6f90904ce211371d7414969abd8f9b5ef"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.793435 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" event={"ID":"f89cdf2d-50e4-4089-8345-f11f7791826d","Type":"ContainerStarted","Data":"44d49cb542cb7e83665ac7047938745e010bed6c9bb57eedbf13e90ff0bb7b43"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.793525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" event={"ID":"f89cdf2d-50e4-4089-8345-f11f7791826d","Type":"ContainerStarted","Data":"c522a244dc219f2146fe9387acd94329baa87923bbfc07b4d56bf7d9e6bf93d6"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.803899 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" podStartSLOduration=148.803880648 podStartE2EDuration="2m28.803880648s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.796888975 +0000 UTC m=+173.441525745" watchObservedRunningTime="2026-02-02 14:36:11.803880648 +0000 UTC m=+173.448517418" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.808694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" event={"ID":"8a76e81a-7f92-4baf-9604-1e1c011da3a0","Type":"ContainerStarted","Data":"8dcfd1eaef857715f398fc182b60f85d2107322e48bbc0dae26b995242b9ba42"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.809830 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.819641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" event={"ID":"31732c2e-e945-4fb4-b471-175489c076c4","Type":"ContainerStarted","Data":"42d93e3cbc22074c8226f82035e7a4b8ff016cab9732728da5e2ecc14ab3f7ad"} Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.821740 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dxvvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.821802 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" podUID="3d2cef1c-ff45-4005-8550-4d87d4601dbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.841558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.843993 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.343975817 +0000 UTC m=+173.988612587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.853007 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnc44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.853056 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" podUID="8a76e81a-7f92-4baf-9604-1e1c011da3a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.872426 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" podStartSLOduration=149.87240848 podStartE2EDuration="2m29.87240848s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.869853667 +0000 UTC m=+173.514490437" watchObservedRunningTime="2026-02-02 14:36:11.87240848 +0000 UTC m=+173.517045250" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.921125 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" podStartSLOduration=149.921098131 podStartE2EDuration="2m29.921098131s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.920557878 +0000 UTC m=+173.565194658" watchObservedRunningTime="2026-02-02 14:36:11.921098131 +0000 UTC m=+173.565734901" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.950981 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" podStartSLOduration=148.950958989 podStartE2EDuration="2m28.950958989s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.946296504 +0000 UTC m=+173.590933274" watchObservedRunningTime="2026-02-02 14:36:11.950958989 +0000 UTC m=+173.595595759" Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.955617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.957824 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.457803668 +0000 UTC m=+174.102440438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.964290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.970580 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.470559743 +0000 UTC m=+174.115196693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.974101 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" podStartSLOduration=149.97408361 podStartE2EDuration="2m29.97408361s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.973630189 +0000 UTC m=+173.618266959" watchObservedRunningTime="2026-02-02 14:36:11.97408361 +0000 UTC m=+173.618720370" Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.048211 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zqdwm" podStartSLOduration=150.048192809 podStartE2EDuration="2m30.048192809s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.020556817 +0000 UTC m=+173.665193587" watchObservedRunningTime="2026-02-02 14:36:12.048192809 +0000 UTC m=+173.692829579" Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.056538 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" podStartSLOduration=149.056516845 podStartE2EDuration="2m29.056516845s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.047232065 +0000 UTC m=+173.691868835" watchObservedRunningTime="2026-02-02 14:36:12.056516845 +0000 UTC m=+173.701153615" Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.066010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.066542 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.566520502 +0000 UTC m=+174.211157272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.084990 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" podStartSLOduration=150.084965307 podStartE2EDuration="2m30.084965307s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.075285868 +0000 UTC m=+173.719922638" watchObservedRunningTime="2026-02-02 14:36:12.084965307 +0000 UTC m=+173.729602097" Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.094792 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 14:31:11 +0000 UTC, rotation deadline is 2026-11-23 02:01:11.556120623 +0000 UTC Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.094824 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7043h24m59.461298363s for next certificate rotation Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.168571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.169089 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.669064763 +0000 UTC m=+174.313701533 (durationBeforeRetry 500ms). 
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.275085 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.275722 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.775693587 +0000 UTC m=+174.420330357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.377590 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.378086 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.878069694 +0000 UTC m=+174.522706464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.478851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.479153 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.979109169 +0000 UTC m=+174.623745939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.479237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.479956 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.979891727 +0000 UTC m=+174.624528498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.495148 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.495222 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.512853 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:12 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:12 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:12 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.512938 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.553055 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.553103 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.554323 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4hhbx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.554410 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" podUID="78130644-70b6-4285-9ca7-e5a671bd1111" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.580101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.580373 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.080323027 +0000 UTC m=+174.724959797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.682514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.683036 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.183014882 +0000 UTC m=+174.827651652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.725329 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.785551 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.785748 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.285708799 +0000 UTC m=+174.930345569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.785819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.786246 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.286238021 +0000 UTC m=+174.930874791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.842547 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerStarted","Data":"e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.850738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" event={"ID":"f75d2e36-7785-4a76-8dfb-55227d418d19","Type":"ContainerStarted","Data":"ef8720acbbb0cde38cd11f20ddc3b5bbe8043425fdcbdb9c0466357c3eb84c72"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.856491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerStarted","Data":"cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.857481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.862382 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerStarted","Data":"79a060c65a071c8a6eac94dc82b8c5d175aa78c407291049a9ac6b9c662bbb68"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.868145 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.868184 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.882781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" event={"ID":"1d00dceb-f9c4-4c49-a631-ea69008c387a","Type":"ContainerStarted","Data":"e7b47fb05dc07563c6e17e3f38cda928b37cca11fcd6eb86f6712a8323f47042"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.886871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.887228 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.387212404 +0000 UTC m=+175.031849174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.889283 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" podStartSLOduration=150.889262575 podStartE2EDuration="2m30.889262575s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.883066482 +0000 UTC m=+174.527703252" watchObservedRunningTime="2026-02-02 14:36:12.889262575 +0000 UTC m=+174.533899345"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.895273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" event={"ID":"0ade6e3e-6274-4469-af6f-39455fd721db","Type":"ContainerStarted","Data":"5bdb58e3c8554e2e107e0a7bd7602f9f2c1fb7c1de002538f6347cee6a529395"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.896439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" event={"ID":"b1cf41b3-7232-4a16-ad7f-0a686f1653dd","Type":"ContainerStarted","Data":"f37dab21bb8c799a1fb48bfe7e098a1d7a0a48c1c7e0f9758ad1f7da6a9820fd"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.919448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" event={"ID":"18ef05f5-ba54-4dfe-adeb-32ed86dfce28","Type":"ContainerStarted","Data":"0b0d898dea99ae6130b83a51c70ce6a281543fdcf40703ef20b467bd4b5016f4"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.921673 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.921748 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mm87w container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.921782 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" podUID="18ef05f5-ba54-4dfe-adeb-32ed86dfce28" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.977372 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mcwnk" event={"ID":"f9f98e83-4853-4d43-bf81-09795442acc8","Type":"ContainerStarted","Data":"4cc7d7ac633cd7881e6e9539601545b2ba3d9d5a888752312433e2fd7df21bf0"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.994061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" event={"ID":"90d2d2e9-b85f-46b8-b768-a59ebd9fd423","Type":"ContainerStarted","Data":"40517cccc8efefaf1477fcaf7a8cd3a66f7382893197e2ea8c5536d52860bf2c"}
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.990263 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.490225398 +0000 UTC m=+175.134862168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.989737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.031687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" event={"ID":"c40fc5ef-7c09-46e1-808d-f388cba3a5e3","Type":"ContainerStarted","Data":"9d4299dd4ee149891ee67857fd20408464197200a25f1484ca8f9abbe611699c"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.036430 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" event={"ID":"e1a1dc5f-b886-4775-a090-0fe774fb23ed","Type":"ContainerStarted","Data":"df007b47b50059c9e35f662246defb9d24cdf2981d4b8eebd50d0d27504470a2"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.047225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" event={"ID":"ca2f1c29-72b6-4768-8245-c5db262d052a","Type":"ContainerStarted","Data":"faa857b149c345bd8bfa07adb91b3ffbe87eccda487e78297704f8b5002e9979"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.048350 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.050806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" event={"ID":"31732c2e-e945-4fb4-b471-175489c076c4","Type":"ContainerStarted","Data":"372d1ca5d39707b24abc420abf781fd41d51eddec701ad88b11b90dd08baed28"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.072352 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podStartSLOduration=150.072331155 podStartE2EDuration="2m30.072331155s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.006951151 +0000 UTC m=+174.651587921" watchObservedRunningTime="2026-02-02 14:36:13.072331155 +0000 UTC m=+174.716967925"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.095610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
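Every MountVolume/UnmountVolume failure in this stretch has the same root cause: the kubevirt.io.hostpath-provisioner CSI driver has not registered with this kubelet yet, so each operation is rejected before it ever reaches the driver and is requeued 500ms later. A minimal sketch of how one might confirm driver registration from outside the node, assuming the kubernetes Python client and a reachable kubeconfig (neither appears in the log):

    # List the CSI drivers each node has registered; the image-registry PVC
    # cannot mount until kubevirt.io.hostpath-provisioner shows up here.
    from kubernetes import client, config

    config.load_kube_config()
    for csinode in client.StorageV1Api().list_csi_node().items:
        drivers = [d.name for d in (csinode.spec.drivers or [])]
        print(csinode.metadata.name, drivers)
        if "kubevirt.io.hostpath-provisioner" not in drivers:
            print("  -> not registered; the driver's node plugin is likely still starting")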
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.095865 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.595819414 +0000 UTC m=+175.240456184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.096353 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.097048 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.597025384 +0000 UTC m=+175.241662154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.116027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" event={"ID":"a72caff3-6c15-4b44-9821-ed7b30a13b58","Type":"ContainerStarted","Data":"795f26ae7c23f2ca59379d8d860dbf52ed4a817bae1c93536e11d7327f2b272a"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.136003 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" podStartSLOduration=151.135980356 podStartE2EDuration="2m31.135980356s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.116561727 +0000 UTC m=+174.761198497" watchObservedRunningTime="2026-02-02 14:36:13.135980356 +0000 UTC m=+174.780617126"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.145238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" event={"ID":"cc58cc97-069b-4691-88ed-cc2788096a6e","Type":"ContainerStarted","Data":"ac62dba72a848cdafce7b31bdccf24a47e3c364fd51e800d5894de97bac8717d"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.160381 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" podStartSLOduration=151.160363498 podStartE2EDuration="2m31.160363498s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.159517058 +0000 UTC m=+174.804153838" watchObservedRunningTime="2026-02-02 14:36:13.160363498 +0000 UTC m=+174.805000268"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.182391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" event={"ID":"6ea4b230-5ebc-4712-88e0-ce48acfc4785","Type":"ContainerStarted","Data":"7729c375d16c72e8236ce14da691bfecff9d17b641c9dead88ab01677f5f85e3"}
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.183213 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.183271 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184023 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnc44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184137 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xl8hj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184130 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" podUID="8a76e81a-7f92-4baf-9604-1e1c011da3a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184192 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.199316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.200500 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.700484489 +0000 UTC m=+175.345121259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.208965 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" podStartSLOduration=150.208932197 podStartE2EDuration="2m30.208932197s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.194221165 +0000 UTC m=+174.838857925" watchObservedRunningTime="2026-02-02 14:36:13.208932197 +0000 UTC m=+174.853568967"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.212852 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.219640 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.254414 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" podStartSLOduration=150.25438091 podStartE2EDuration="2m30.25438091s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.252047762 +0000 UTC m=+174.896684522" watchObservedRunningTime="2026-02-02 14:36:13.25438091 +0000 UTC m=+174.899017680"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.289806 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" podStartSLOduration=150.289777884 podStartE2EDuration="2m30.289777884s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.287018976 +0000 UTC m=+174.931655746" watchObservedRunningTime="2026-02-02 14:36:13.289777884 +0000 UTC m=+174.934414664"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.301834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.306845 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.806825694 +0000 UTC m=+175.451462464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.325555 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" podStartSLOduration=151.325519786 podStartE2EDuration="2m31.325519786s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.321641751 +0000 UTC m=+174.966278521" watchObservedRunningTime="2026-02-02 14:36:13.325519786 +0000 UTC m=+174.970156546"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.412640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.415434 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.915386695 +0000 UTC m=+175.560023465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.418689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.419702 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.919681971 +0000 UTC m=+175.564318741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.420843 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" podStartSLOduration=151.420786828 podStartE2EDuration="2m31.420786828s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.417542418 +0000 UTC m=+175.062179208" watchObservedRunningTime="2026-02-02 14:36:13.420786828 +0000 UTC m=+175.065423598"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.481857 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" podStartSLOduration=150.481828125 podStartE2EDuration="2m30.481828125s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.477122609 +0000 UTC m=+175.121759389" watchObservedRunningTime="2026-02-02 14:36:13.481828125 +0000 UTC m=+175.126464895"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.518342 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:13 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:13 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:13 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.518470 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.520011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.520810 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.020786007 +0000 UTC m=+175.665422787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.608779 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" podStartSLOduration=150.608753199 podStartE2EDuration="2m30.608753199s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.603485669 +0000 UTC m=+175.248122429" watchObservedRunningTime="2026-02-02 14:36:13.608753199 +0000 UTC m=+175.253389969"
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.621956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.622550 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.122525669 +0000 UTC m=+175.767162439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.723079 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.723680 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.223658176 +0000 UTC m=+175.868294946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.825419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.826145 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.326120286 +0000 UTC m=+175.970757056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.926939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.927365 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.427342724 +0000 UTC m=+176.071979494 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.029387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.029882 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.529864235 +0000 UTC m=+176.174501005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.131348 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.131801 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.63171126 +0000 UTC m=+176.276348040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.132275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.132988 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.632952721 +0000 UTC m=+176.277589491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.223401 4869 generic.go:334] "Generic (PLEG): container finished" podID="debcc43e-e06f-486a-af8c-6a9d4d553913" containerID="79a060c65a071c8a6eac94dc82b8c5d175aa78c407291049a9ac6b9c662bbb68" exitCode=0 Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.223486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerDied","Data":"79a060c65a071c8a6eac94dc82b8c5d175aa78c407291049a9ac6b9c662bbb68"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.234929 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.235459 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.735429341 +0000 UTC m=+176.380066111 (durationBeforeRetry 500ms). 
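The run above is the kubelet volume reconciler retrying two operations against the same PVC roughly every 100ms: MountDevice for the incoming image-registry pod and TearDown for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b. Both fail immediately because the node plugin for kubevirt.io.hostpath-provisioner has not yet registered with the kubelet, and each failure is requeued with a fixed 500ms backoff ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal Go sketch of that wait-for-registration pattern (hypothetical names; not the kubelet's actual code):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // registry stands in for the kubelet's CSI driver registry; a driver shows up
    // here only after its node plugin registers over the plugin-registration socket.
    type registry struct {
    	mu      sync.RWMutex
    	drivers map[string]bool
    }

    func (r *registry) registered(name string) bool {
    	r.mu.RLock()
    	defer r.mu.RUnlock()
    	return r.drivers[name]
    }

    func main() {
    	reg := &registry{drivers: map[string]bool{}}
    	const driver = "kubevirt.io.hostpath-provisioner"
    	const backoff = 500 * time.Millisecond // the durationBeforeRetry seen in the log

    	// Simulate late registration, as when csi-hostpathplugin starts slowly.
    	go func() {
    		time.Sleep(2 * time.Second)
    		reg.mu.Lock()
    		reg.drivers[driver] = true
    		reg.mu.Unlock()
    	}()

    	for !reg.registered(driver) {
    		fmt.Printf("driver name %s not found in the list of registered CSI drivers; retry in %v\n", driver, backoff)
    		time.Sleep(backoff) // "No retries permitted until ..."
    	}
    	fmt.Println("driver registered; MountDevice can proceed")
    }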
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.235851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"431fcb70cc98461d103c7d616c03636fbcbfad85bee6bb13d436e2e8654f0988"}
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.235979 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.236665 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.736647761 +0000 UTC m=+176.381284531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.249256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" event={"ID":"1d00dceb-f9c4-4c49-a631-ea69008c387a","Type":"ContainerStarted","Data":"dd97d4a06a90cd2cda4f8644b12c3149169049a2f7ded09da0000e4775e24d6f"}
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.255357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" event={"ID":"0ade6e3e-6274-4469-af6f-39455fd721db","Type":"ContainerStarted","Data":"2fe24b2358acc507cb164f64c0ef048b0918ef9839bfde0a0b2b8cdbf6f926ca"}
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.261212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" event={"ID":"b1cf41b3-7232-4a16-ad7f-0a686f1653dd","Type":"ContainerStarted","Data":"216312852a9f884101982c9754e0108b3105ec374289ba9e25ba29f1e483c3a5"}
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.272426 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.272723 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.273329 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mcwnk" event={"ID":"f9f98e83-4853-4d43-bf81-09795442acc8","Type":"ContainerStarted","Data":"06922e5520bf22f7f5d842b5c1203fcdfd0d3eb01fafd05a614a43cd41b01c4e"}
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.275703 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mcwnk"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.280964 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mm87w container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.281060 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" podUID="18ef05f5-ba54-4dfe-adeb-32ed86dfce28" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.337539 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.338107 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.838071126 +0000 UTC m=+176.482707946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.338732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.343120 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.843096189 +0000 UTC m=+176.487732959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.386789 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" podStartSLOduration=152.386756937 podStartE2EDuration="2m32.386756937s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.383401684 +0000 UTC m=+176.028038454" watchObservedRunningTime="2026-02-02 14:36:14.386756937 +0000 UTC m=+176.031393707"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.440397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.443867 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.943834286 +0000 UTC m=+176.588471216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
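The pod_startup_latency_tracker entries are plain duration arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and podStartE2EDuration is the same value rendered as a Go duration string (for the ingress-operator above, 14:33:42 to 14:36:14.386756937 is 152.386756937s, i.e. "2m32.386756937s"). A small sketch reproducing those numbers:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout accepts the "+0000 UTC" suffix used in the log timestamps.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2026-02-02 14:33:42 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse(layout, "2026-02-02 14:36:14.386756937 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(observed.Sub(created)) // 2m32.386756937s, the podStartE2EDuration above
    }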
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.476978 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mcwnk" podStartSLOduration=9.476941574 podStartE2EDuration="9.476941574s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.437708646 +0000 UTC m=+176.082345446" watchObservedRunningTime="2026-02-02 14:36:14.476941574 +0000 UTC m=+176.121578344"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.511349 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" podStartSLOduration=151.511316342 podStartE2EDuration="2m31.511316342s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.479070726 +0000 UTC m=+176.123707506" watchObservedRunningTime="2026-02-02 14:36:14.511316342 +0000 UTC m=+176.155953112"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.512073 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:14 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:14 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:14 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.512151 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.512827 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" podStartSLOduration=151.512819219 podStartE2EDuration="2m31.512819219s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.510451042 +0000 UTC m=+176.155087812" watchObservedRunningTime="2026-02-02 14:36:14.512819219 +0000 UTC m=+176.157455989"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.544228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.544810 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.044785809 +0000 UTC m=+176.689422579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.585204 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.645611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.645861 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.145832933 +0000 UTC m=+176.790469703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.646183 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.646621 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.146611843 +0000 UTC m=+176.791248613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.750469 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.751236 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.251204606 +0000 UTC m=+176.895841376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.852643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.853238 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.353219193 +0000 UTC m=+176.997855963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.955570 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.955744 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.455710075 +0000 UTC m=+177.100346855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.956009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.956506 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.456491453 +0000 UTC m=+177.101128223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.057508 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.057755 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.557707283 +0000 UTC m=+177.202344053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.058305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.058830 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.5588182 +0000 UTC m=+177.203454970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.124782 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6crm"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.126005 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm"
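The router startup probe keeps failing with HTTP 500, and several readiness and liveness probes above fail with connection refused while their containers come up; the kubelet treats a transport error or a non-2xx status as probe failure and logs the start of the response body (the [-]backend-http / [-]has-synced lines). A rough Go equivalent of such an HTTP check (a sketch, not the kubelet's prober):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probe returns an error for transport failures or non-2xx responses and
    // echoes the start of the body, like the "start-of-body=" field in the log.
    func probe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
    	}
    	defer resp.Body.Close()
    	head, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
    	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
    		return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body=%s",
    			resp.StatusCode, head)
    	}
    	return nil
    }

    func main() {
    	if err := probe("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Println("Probe failed:", err)
    	}
    }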
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.128648 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.143687 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.160595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.161052 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.661031523 +0000 UTC m=+177.305668293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.262813 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.762797496 +0000 UTC m=+177.407434266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.278704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerStarted","Data":"08b218f97c320580457a90382097567e64a984def27625ca3e5653ef269c19ed"}
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.279548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.304617 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.304694 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.321309 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.327732 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" podStartSLOduration=153.327707948 podStartE2EDuration="2m33.327707948s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:15.315150508 +0000 UTC m=+176.959787278" watchObservedRunningTime="2026-02-02 14:36:15.327707948 +0000 UTC m=+176.972344718"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.352592 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h9pgx"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.354688 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.354853 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.364262 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.369634 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.370379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.370534 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.370629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.371802 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.871774957 +0000 UTC m=+177.516411727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.373254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.384056 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.434851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.444612 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.479106 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.979085676 +0000 UTC m=+177.623722456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.515463 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:15 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:15 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:15 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.516033 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.543094 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.544658 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.582237 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.082213682 +0000 UTC m=+177.726850452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.582795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.583077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.592168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.630191 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.684589 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.184568159 +0000 UTC m=+177.829204939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
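The csi-hostpathplugin pod has started (ContainerStarted above), so these errors should clear once the driver finishes registering; until then the driver is missing both node-side (no socket under /var/lib/kubelet/plugins_registry/) and in the kubelet's in-memory registry. A hedged client-go sketch for checking whether the cluster-scoped CSIDriver object for kubevirt.io.hostpath-provisioner exists (assumes a reachable kubeconfig; this lists API objects, not the kubelet's node-local registration state):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range drivers.Items {
    		fmt.Println(d.Name) // expect kubevirt.io.hostpath-provisioner once deployed
    	}
    }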
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.684840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.684976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.685055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.685098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.696402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.748292 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cm44g"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.749838 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.757270 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cm44g"]
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.789660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.789918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790028 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.790624 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.290607118 +0000 UTC m=+177.935243878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.843119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893180 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.893529 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.393515778 +0000 UTC m=+178.038152548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.905177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.915615 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.995963 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.996186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.996251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.996273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.996633 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.496618834 +0000 UTC m=+178.141255604 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.997879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.998183 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.033736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.090821 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.098739 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.099435 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.599415532 +0000 UTC m=+178.244052312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.202431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.203163 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.703147783 +0000 UTC m=+178.347784543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.278202 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.304008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.304455 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.804421793 +0000 UTC m=+178.449058553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.368930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"fa36614e15907890a42ef404912d31f1c698eb5a63732a6a7df259babae4ecab"} Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.387638 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.413463 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.413880 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.913862005 +0000 UTC m=+178.558498775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.518984 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:16 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:16 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:16 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.519422 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.521302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.523625 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.023607675 +0000 UTC m=+178.668244445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.624441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.624897 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.124875115 +0000 UTC m=+178.769511875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.728431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.731379 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.231346934 +0000 UTC m=+178.875983704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.767700 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.780248 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.841768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.842061 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.342028617 +0000 UTC m=+178.986665387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.842379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.842970 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.34296124 +0000 UTC m=+178.987598010 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.943575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.943811 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.443773399 +0000 UTC m=+179.088410169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.944030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.944439 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.444429795 +0000 UTC m=+179.089066565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.044683 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.044947 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.544894075 +0000 UTC m=+179.189530855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.045071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.045486 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.545470729 +0000 UTC m=+179.190107499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.146104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.146308 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.646281238 +0000 UTC m=+179.290918008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.146395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.146723 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.646715749 +0000 UTC m=+179.291352509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.247244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.248000 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.747970679 +0000 UTC m=+179.392607459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.338845 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.340118 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.345051 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.349379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.349619 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.349879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.350018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.350481 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.850458529 +0000 UTC m=+179.495095349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.360454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.363854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.363933 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.365523 4869 patch_prober.go:28] interesting pod/console-f9d7485db-ptmkd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.365601 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ptmkd" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.396376 4869 generic.go:334] "Generic (PLEG): container finished" podID="e56fa221-6e79-4c96-be0a-17db4803a127" containerID="b2450dd93a7c78de896bbf627e97911c1993d1380dd59859505aa8d294fc3f44" exitCode=0 Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.396526 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"b2450dd93a7c78de896bbf627e97911c1993d1380dd59859505aa8d294fc3f44"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.396575 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerStarted","Data":"3fdc2755e50c40ab06f7338836dcc4d68f5937d9bf9ebd941d8d98f6a64dcd17"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.401170 4869 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.402351 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.407478 4869 generic.go:334] "Generic (PLEG): container finished" podID="35334030-48c7-4d7e-b202-75371c2c74f0" containerID="cec776d323dbe8236b1c9db4384ebac1fa16daa022330512eaace0844c3b9f88" exitCode=0 Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.407561 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" 
event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"cec776d323dbe8236b1c9db4384ebac1fa16daa022330512eaace0844c3b9f88"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.407601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerStarted","Data":"8d9df88387111e57bb9b1545d6cad7ddb2c341d0c3125931bf95ce3cfbbe8249"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.413878 4869 generic.go:334] "Generic (PLEG): container finished" podID="20990512-5147-4de8-95e0-f40e2156f395" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb" exitCode=0 Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.413982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.414023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerStarted","Data":"63b62c3c310182414e285b775897296c2f662f58b08903ff210519308baba3a6"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.432330 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c21252d-a76f-437f-8611-f42993137df3" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2" exitCode=0 Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.432454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.432494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerStarted","Data":"ab3d419e69ab359ef2eb23e842d3d4f04eb05500497bb827ac7bf3115cbf4af4"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.451346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.453626 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.953584916 +0000 UTC m=+179.598221686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.453843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.454023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.454139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.454345 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.954333464 +0000 UTC m=+179.598970414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.454501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.455077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.456484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.456499 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.457645 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"efd99c9a4c72d1179ce8abb941e3dfc8599952e3dae1a7cc1ace6774a6786c46"} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.492019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.515204 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:17 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:17 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:17 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.515304 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.566251 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.566526 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:18.066486543 +0000 UTC m=+179.711123313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.566665 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.568730 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:18.068707078 +0000 UTC m=+179.713344038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.586117 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4hhbx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]log ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]etcd ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/max-in-flight-filter ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 14:36:17 crc kubenswrapper[4869]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-startinformers ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 14:36:17 crc kubenswrapper[4869]: livez check failed Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.586212 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" podUID="78130644-70b6-4285-9ca7-e5a671bd1111" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.661116 4869 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T14:36:17.401197022Z","Handler":null,"Name":""} Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.661384 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.667106 4869 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.667314 4869 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.667733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.677197 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.718004 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.744069 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.744325 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.779090 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.779168 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.846178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.876874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.876960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.876984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.878052 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.880296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.897764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.901838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.960492 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.076811 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.158624 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"]
Feb 02 14:36:18 crc kubenswrapper[4869]: W0202 14:36:18.192446 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe54b4f_c3d6_40ec_8d5d_422b6d86ad97.slice/crio-01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf WatchSource:0}: Error finding container 01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf: Status 404 returned error can't find the container with id 01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.211210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.326099 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.327670 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.331803 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.331890 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.331970 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.346842 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.348444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.351432 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.361349 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.378489 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.486714 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerStarted","Data":"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a"}
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.486808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerStarted","Data":"01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf"}
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.488409 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.489168 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.497481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"13780c7c2507648136ea93745567cc7dd4a9423d873dcf52722b800ccb531c6b"}
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.504580 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.510437 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:18 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:18 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:18 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.510493 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.524844 4869 generic.go:334] "Generic (PLEG): container finished" podID="7bc37994-d436-4a72-93dd-610683ab871f" containerID="cdd5576f9f5156d7b56f7ccd77833310c25ec9af1f7cd6b12b8a45a03d8370d2" exitCode=0
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.525172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"cdd5576f9f5156d7b56f7ccd77833310c25ec9af1f7cd6b12b8a45a03d8370d2"}
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.525228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerStarted","Data":"b1580b4316ca71373b5cb2c825bf6078883c98f4a09960236d48783fdf4eb2b0"}
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.537247 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" podStartSLOduration=156.537212271 podStartE2EDuration="2m36.537212271s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:18.524807374 +0000 UTC m=+180.169444144" watchObservedRunningTime="2026-02-02 14:36:18.537212271 +0000 UTC m=+180.181849041"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.538706 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.562614 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" podStartSLOduration=13.562574056999999 podStartE2EDuration="13.562574057s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:18.55135967 +0000 UTC m=+180.195996440" watchObservedRunningTime="2026-02-02 14:36:18.562574057 +0000 UTC m=+180.207210827"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.576263 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.576350 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:18 crc kubenswrapper[4869]: W0202 14:36:18.576848 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod442e63b3_7f70_4524_b229_aedfb054f395.slice/crio-1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2 WatchSource:0}: Error finding container 1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2: Status 404 returned error can't find the container with id 1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.578798 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.582824 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593333 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593500 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.594980 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.595263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.602330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.635164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.650412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.669516 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.684739 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.754392 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.756334 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.763529 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"]
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.904354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.904414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.904512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.006615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.006681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.009177 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.009317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.009309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.033423 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.078640 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.408747 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"]
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.525482 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:19 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:19 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:19 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.525550 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.531276 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.531820 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 14:36:19 crc kubenswrapper[4869]: W0202 14:36:19.553801 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff46a125_ff31_42f7_9a16_3eccdd7dd393.slice/crio-b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a WatchSource:0}: Error finding container b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a: Status 404 returned error can't find the container with id b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.587480 4869 generic.go:334] "Generic (PLEG): container finished" podID="442e63b3-7f70-4524-b229-aedfb054f395" containerID="9fde05ff8b3ab7b33bf7fd64de1786d6d6c5b221f2074b9b8d881ce96c0861b1" exitCode=0
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.589006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"9fde05ff8b3ab7b33bf7fd64de1786d6d6c5b221f2074b9b8d881ce96c0861b1"}
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.589049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerStarted","Data":"1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2"}
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.643539 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerStarted","Data":"4b24ce2f2248f4687d66222d8d64c3f4c7ab1a667da994a65103b5daf7f6074a"}
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.778761 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"]
Feb 02 14:36:19 crc kubenswrapper[4869]: W0202 14:36:19.856347 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e119c7_dd08_471f_9800_5bda7b22a6d6.slice/crio-9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883 WatchSource:0}: Error finding container 9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883: Status 404 returned error can't find the container with id 9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.976316 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.978046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.981741 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.981759 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.988454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.163195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.163375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.264808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.264934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.265094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.311335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.513336 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:20 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:20 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:20 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.513652 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.599346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.647558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerStarted","Data":"4b8dc8f4396db1dcb28c3807745cf0ef5dad421ac82661e4237038d651a54858"}
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.647621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerStarted","Data":"b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a"}
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.656301 4869 generic.go:334] "Generic (PLEG): container finished" podID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerID="5761dc2d2fafda3cf6b457c2de25d204c006ac8d85953364b9966521a437f222" exitCode=0
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.656405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"5761dc2d2fafda3cf6b457c2de25d204c006ac8d85953364b9966521a437f222"}
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.656446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerStarted","Data":"9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883"}
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.663666 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerID="5bd8c5ee8e9e88d2880af3adebbdb0e7854ddadb441729295abb6d7e6958afdd" exitCode=0
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.664263 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"5bd8c5ee8e9e88d2880af3adebbdb0e7854ddadb441729295abb6d7e6958afdd"}
Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.698365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.698335108 podStartE2EDuration="2.698335108s" podCreationTimestamp="2026-02-02 14:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:20.672669055 +0000 UTC m=+182.317305825" watchObservedRunningTime="2026-02-02 14:36:20.698335108 +0000 UTC m=+182.342971878"
Feb 02 14:36:20 crc kubenswrapper[4869]: E0202 14:36:20.708587 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9815bf_1049_47c8_8eda_cf2602f2eb83.slice/crio-conmon-e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9815bf_1049_47c8_8eda_cf2602f2eb83.slice/crio-e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.057422 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 14:36:21 crc kubenswrapper[4869]: W0202 14:36:21.118320 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0fa6bddf_2294_4b66_816d_1bdaf3cd3c93.slice/crio-56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052 WatchSource:0}: Error finding container 56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052: Status 404 returned error can't find the container with id 56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.509950 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:21 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:21 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:21 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.510047 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.696776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerStarted","Data":"56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052"}
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.703727 4869 generic.go:334] "Generic (PLEG): container finished" podID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerID="e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f" exitCode=0
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.703783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerDied","Data":"e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f"}
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.709494 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerID="4b8dc8f4396db1dcb28c3807745cf0ef5dad421ac82661e4237038d651a54858" exitCode=0
Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.709571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerDied","Data":"4b8dc8f4396db1dcb28c3807745cf0ef5dad421ac82661e4237038d651a54858"}
Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.512254 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:22 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:22 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:22 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.512338 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.559570 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.564928 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.746340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerStarted","Data":"2a21e2516607900a6ee89e7cab6b19874f814d0f0ac5236718de9219148f8503"}
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.179608 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.180876 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.212522 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.212493992 podStartE2EDuration="4.212493992s" podCreationTimestamp="2026-02-02 14:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:22.770385086 +0000 UTC m=+184.415021856" watchObservedRunningTime="2026-02-02 14:36:23.212493992 +0000 UTC m=+184.857130762"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") "
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322458 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") "
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322476 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") "
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") "
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") "
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.326306 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff46a125-ff31-42f7-9a16-3eccdd7dd393" (UID: "ff46a125-ff31-42f7-9a16-3eccdd7dd393"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.327039 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab9815bf-1049-47c8-8eda-cf2602f2eb83" (UID: "ab9815bf-1049-47c8-8eda-cf2602f2eb83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.336312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff46a125-ff31-42f7-9a16-3eccdd7dd393" (UID: "ff46a125-ff31-42f7-9a16-3eccdd7dd393"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.348122 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl" (OuterVolumeSpecName: "kube-api-access-wwxkl") pod "ab9815bf-1049-47c8-8eda-cf2602f2eb83" (UID: "ab9815bf-1049-47c8-8eda-cf2602f2eb83"). InnerVolumeSpecName "kube-api-access-wwxkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.357195 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab9815bf-1049-47c8-8eda-cf2602f2eb83" (UID: "ab9815bf-1049-47c8-8eda-cf2602f2eb83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424406 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424446 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424457 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424465 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424475 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.509311 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:23 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:23 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:23 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.509429 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.536745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mcwnk"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.804635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerDied","Data":"ebbb35a369b9723fdfeb34f546ac806481285e12e0053e2c255a12c42d7b4ce5"}
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.804719 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebbb35a369b9723fdfeb34f546ac806481285e12e0053e2c255a12c42d7b4ce5"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.804756 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.826892 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.827244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerDied","Data":"b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a"}
Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.827273 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a"
Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.527848 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:24 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:24 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:24 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.527949 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.847388 4869 generic.go:334] "Generic (PLEG): container finished" podID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerID="2a21e2516607900a6ee89e7cab6b19874f814d0f0ac5236718de9219148f8503" exitCode=0
Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.847815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerDied","Data":"2a21e2516607900a6ee89e7cab6b19874f814d0f0ac5236718de9219148f8503"}
Feb 02 14:36:25 crc kubenswrapper[4869]: I0202 14:36:25.507771 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:25 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:25 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:25 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:25 crc kubenswrapper[4869]: I0202 14:36:25.507868 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.228928 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.378296 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") "
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.378501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") "
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.379064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" (UID: "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.387260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" (UID: "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.479965 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.480010 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.509297 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:26 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:26 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:26 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.509400 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.879006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerDied","Data":"56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052"}
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.879059 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052"
Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.879172 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.363994 4869 patch_prober.go:28] interesting pod/console-f9d7485db-ptmkd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.364071 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ptmkd" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.507277 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.512136 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.577380 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.578514 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.577448 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.578642 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.745494 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"]
Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.746653 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" containerID="cri-o://35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4" gracePeriod=30
Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.793439 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"]
Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.794113 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" containerID="cri-o://cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d" gracePeriod=30
Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.904588 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.938565 4869 generic.go:334] "Generic (PLEG): container finished" podID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerID="35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4" exitCode=0
Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.938677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerDied","Data":"35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4"}
Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.941628 4869 generic.go:334] "Generic (PLEG): container finished" podID="77160080-14bd-4f22-875d-ec53c922a9ca" containerID="cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d" exitCode=0
Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.941677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerDied","Data":"cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d"}
Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.368282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.372729 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.435014 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.435100 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.903721 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577860 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577884 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577937 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577967 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578008 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578516 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb"} pod="openshift-console/downloads-7954f5f757-zqdwm" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578641 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" containerID="cri-o://8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb" gracePeriod=2
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578942 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578963 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.646863 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.646957 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 02 14:36:39 crc kubenswrapper[4869]: I0202 14:36:39.978130 4869 generic.go:334] "Generic (PLEG): container finished" podID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerID="8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb" exitCode=0
Feb 02 14:36:39 crc kubenswrapper[4869]: I0202 14:36:39.978198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerDied","Data":"8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb"}
Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.304871 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.305734 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.305802 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.306544 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.306623 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b" gracePeriod=600
Feb 02 14:36:46 crc kubenswrapper[4869]: I0202 14:36:46.034277 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b" exitCode=0
Feb 02 14:36:46 crc kubenswrapper[4869]: I0202 14:36:46.034333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"}
Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.435042 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.435549 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.466203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"
Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.579028 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.579576 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:49 crc kubenswrapper[4869]: I0202 14:36:49.646769 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 14:36:49 crc kubenswrapper[4869]: I0202 14:36:49.646881 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.020879 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.021815 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpswn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h9pgx_openshift-marketplace(35334030-48c7-4d7e-b202-75371c2c74f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.023421 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h9pgx" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.131261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h9pgx" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.349673 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.349852 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cd4wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g6crm_openshift-marketplace(20990512-5147-4de8-95e0-f40e2156f395): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.351270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g6crm" podUID="20990512-5147-4de8-95e0-f40e2156f395" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.784541 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g6crm" podUID="20990512-5147-4de8-95e0-f40e2156f395" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.835494 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.841746 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.878095 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879729 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879782 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879789 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879803 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879812 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879825 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerName="collect-profiles" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879834 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerName="collect-profiles" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879844 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879853 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879999 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880013 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880025 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880041 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880054 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerName="collect-profiles" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880527 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.887284 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.919964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920016 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920035 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920092 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920140 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920428 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920564 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.921606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.921638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca" (OuterVolumeSpecName: "client-ca") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.922081 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config" (OuterVolumeSpecName: "config") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.922709 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config" (OuterVolumeSpecName: "config") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.923160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.945528 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.945880 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlvm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-h4pkg_openshift-marketplace(442e63b3-7f70-4524-b229-aedfb054f395): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.947674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-h4pkg" podUID="442e63b3-7f70-4524-b229-aedfb054f395" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.948761 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.949089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch" (OuterVolumeSpecName: "kube-api-access-mpxch") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "kube-api-access-mpxch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.949626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx" (OuterVolumeSpecName: "kube-api-access-s7sgx") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "kube-api-access-s7sgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.952892 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.959191 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.959396 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l744,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cm44g_openshift-marketplace(e56fa221-6e79-4c96-be0a-17db4803a127): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.960491 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cm44g" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021724 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021738 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021751 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021759 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021767 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 
02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021775 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021783 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021793 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021801 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.023028 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.023764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.024127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.026658 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44bcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wrnr2_openshift-marketplace(7bc37994-d436-4a72-93dd-610683ab871f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.027678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.027895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wrnr2" podUID="7bc37994-d436-4a72-93dd-610683ab871f" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.041745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.086985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerDied","Data":"b3271718de5d10823c1d8cb58a92daa70441d4c0775319d6b1e4703935350e20"} Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.087046 4869 scope.go:117] "RemoveContainer" containerID="cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.087123 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.090323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerDied","Data":"e0e031e07f3777bf084c57bd2ad11cca8d11083d95a8cbf49d91d2ce2ed3c4ce"} Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.090473 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.093218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wrnr2" podUID="7bc37994-d436-4a72-93dd-610683ab871f" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.093526 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-h4pkg" podUID="442e63b3-7f70-4524-b229-aedfb054f395" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.106152 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cm44g" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.124517 4869 scope.go:117] "RemoveContainer" containerID="35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.225459 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.228465 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.250998 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.251433 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.255037 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.536125 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.099163 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c21252d-a76f-437f-8611-f42993137df3" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3" exitCode=0 Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.099294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.106821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerStarted","Data":"385702c722f118704ef90db2388dc715871a723316fb6a4763da039c9a02db57"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.106882 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.107147 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.107220 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.113056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.114351 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerStarted","Data":"4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.114415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerStarted","Data":"455b2abd7e5482aef3332c14262e762b84b5a7304c0eb824ce7c84e17fb72fbf"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.114888 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.116328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerStarted","Data":"fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.118741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerStarted","Data":"26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.171936 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" podStartSLOduration=3.171885137 podStartE2EDuration="3.171885137s" podCreationTimestamp="2026-02-02 14:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:55.168424869 +0000 UTC m=+216.813061639" watchObservedRunningTime="2026-02-02 14:36:55.171885137 +0000 UTC m=+216.816521917" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.278663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.345592 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.346658 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.353241 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.353278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.361457 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.456185 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.456305 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.471580 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" path="/var/lib/kubelet/pods/77160080-14bd-4f22-875d-ec53c922a9ca/volumes" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.472314 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" path="/var/lib/kubelet/pods/aad51ba6-f20d-48b1-b456-c7309cc35bbd/volumes" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.557927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.558486 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.558075 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.584218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.663755 4869 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.894201 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 14:36:55 crc kubenswrapper[4869]: W0202 14:36:55.905104 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9a90fc62_12a8_426e_91bb_d995f9407e25.slice/crio-16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322 WatchSource:0}: Error finding container 16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322: Status 404 returned error can't find the container with id 16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.124882 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerStarted","Data":"16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322"}
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.128896 4869 generic.go:334] "Generic (PLEG): container finished" podID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerID="fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19" exitCode=0
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.128967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19"}
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.131904 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerID="26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c" exitCode=0
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.132132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c"}
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.134022 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.134105 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.663442 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"]
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.805828 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"]
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.812489 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817059 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817285 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817480 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817807 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.822540 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.822793 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.826103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.830818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"]
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879188 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980466 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.981570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.982805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.983881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.992626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.999686 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:57 crc kubenswrapper[4869]: I0202 14:36:57.140661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerStarted","Data":"71dd72aca4bd7f90802f9d58c8a1b3bc8fc0b095c96486bbfbdac6d01e167b38"}
Feb 02 14:36:57 crc kubenswrapper[4869]: I0202 14:36:57.176233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:57 crc kubenswrapper[4869]: I0202 14:36:57.390841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"]
Feb 02 14:36:57 crc kubenswrapper[4869]: W0202 14:36:57.405618 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd15d0185_0712_4813_8818_f8ff704f3263.slice/crio-81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4 WatchSource:0}: Error finding container 81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4: Status 404 returned error can't find the container with id 81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.148341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerStarted","Data":"88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766"}
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.150411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.150525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerStarted","Data":"81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4"}
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.152338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerStarted","Data":"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"}
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.154347 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerID="71dd72aca4bd7f90802f9d58c8a1b3bc8fc0b095c96486bbfbdac6d01e167b38" exitCode=0
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.154393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerDied","Data":"71dd72aca4bd7f90802f9d58c8a1b3bc8fc0b095c96486bbfbdac6d01e167b38"}
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.165310 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.171068 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-585556997c-k595t" podStartSLOduration=6.171042602 podStartE2EDuration="6.171042602s" podCreationTimestamp="2026-02-02 14:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:58.16899121 +0000 UTC m=+219.813627980" watchObservedRunningTime="2026-02-02 14:36:58.171042602 +0000 UTC m=+219.815679372"
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576613 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576678 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576682 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576743 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.455458 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.474764 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9xjnr" podStartSLOduration=5.264679953 podStartE2EDuration="44.474743523s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.436155645 +0000 UTC m=+179.080792415" lastFinishedPulling="2026-02-02 14:36:56.646219215 +0000 UTC m=+218.290855985" observedRunningTime="2026-02-02 14:36:58.22827316 +0000 UTC m=+219.872909930" watchObservedRunningTime="2026-02-02 14:36:59.474743523 +0000 UTC m=+221.119380293"
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"9a90fc62-12a8-426e-91bb-d995f9407e25\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") "
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"9a90fc62-12a8-426e-91bb-d995f9407e25\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") "
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a90fc62-12a8-426e-91bb-d995f9407e25" (UID: "9a90fc62-12a8-426e-91bb-d995f9407e25"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529651 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.541117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a90fc62-12a8-426e-91bb-d995f9407e25" (UID: "9a90fc62-12a8-426e-91bb-d995f9407e25"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.630841 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.169924 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerStarted","Data":"c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222"}
Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.172019 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerDied","Data":"16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322"}
Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.172087 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322"
Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.172052 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.194456 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kt6r" podStartSLOduration=3.769501522 podStartE2EDuration="42.194422912s" podCreationTimestamp="2026-02-02 14:36:18 +0000 UTC" firstStartedPulling="2026-02-02 14:36:20.663552379 +0000 UTC m=+182.308189149" lastFinishedPulling="2026-02-02 14:36:59.088473769 +0000 UTC m=+220.733110539" observedRunningTime="2026-02-02 14:37:00.193706614 +0000 UTC m=+221.838343404" watchObservedRunningTime="2026-02-02 14:37:00.194422912 +0000 UTC m=+221.839059682"
Feb 02 14:37:01 crc kubenswrapper[4869]: I0202 14:37:01.179586 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerStarted","Data":"4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149"}
Feb 02 14:37:01 crc kubenswrapper[4869]: I0202 14:37:01.202258 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k7wp9" podStartSLOduration=3.819537168 podStartE2EDuration="43.202230871s" podCreationTimestamp="2026-02-02 14:36:18 +0000 UTC" firstStartedPulling="2026-02-02 14:36:20.66765247 +0000 UTC m=+182.312289240" lastFinishedPulling="2026-02-02 14:37:00.050346153 +0000 UTC m=+221.694982943" observedRunningTime="2026-02-02 14:37:01.200752763 +0000 UTC m=+222.845389533" watchObservedRunningTime="2026-02-02 14:37:01.202230871 +0000 UTC m=+222.846867641"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.141748 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 02 14:37:02 crc kubenswrapper[4869]: E0202 14:37:02.142981 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerName="pruner"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.143000 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerName="pruner"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.143130 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerName="pruner"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.143642 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.150372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.150728 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.154128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.170365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.170522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.170570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.298175 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.477444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.950566 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 02 14:37:03 crc kubenswrapper[4869]: I0202 14:37:03.191268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerStarted","Data":"7adfeb67f0661759b89e7e0b4ac36ee5625d863782a8812d5fd336834d3294f2"}
Feb 02 14:37:04 crc kubenswrapper[4869]: I0202 14:37:04.198298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerStarted","Data":"4e29b74a75f39484800450916e4d1c5aab402b78c65dc22472418020d76f3456"}
Feb 02 14:37:04 crc kubenswrapper[4869]: I0202 14:37:04.219322 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.219298555 podStartE2EDuration="2.219298555s" podCreationTimestamp="2026-02-02 14:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:04.216629378 +0000 UTC m=+225.861266158" watchObservedRunningTime="2026-02-02 14:37:04.219298555 +0000 UTC m=+225.863935315"
Feb 02 14:37:05 crc kubenswrapper[4869]: I0202 14:37:05.917245 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:37:05 crc kubenswrapper[4869]: I0202 14:37:05.917686 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:37:06 crc kubenswrapper[4869]: I0202 14:37:06.086109 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:37:06 crc kubenswrapper[4869]: I0202 14:37:06.231725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerStarted","Data":"0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60"}
Feb 02 14:37:06 crc kubenswrapper[4869]: I0202 14:37:06.274023 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:37:07 crc kubenswrapper[4869]: I0202 14:37:07.239641 4869 generic.go:334] "Generic (PLEG): container finished" podID="35334030-48c7-4d7e-b202-75371c2c74f0" containerID="0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60" exitCode=0
Feb 02 14:37:07 crc kubenswrapper[4869]: I0202 14:37:07.240232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60"}
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.118200 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"]
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.260342 4869 generic.go:334] "Generic (PLEG): container finished" podID="7bc37994-d436-4a72-93dd-610683ab871f" containerID="5adb81683a3033beec8093b130282168a76c6d84454acac94fe5c2d0d6d3406d" exitCode=0
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.260431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"5adb81683a3033beec8093b130282168a76c6d84454acac94fe5c2d0d6d3406d"}
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.264693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerStarted","Data":"0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6"}
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.264861 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9xjnr" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server" containerID="cri-o://f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" gracePeriod=2
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.307068 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h9pgx" podStartSLOduration=2.7362304650000002 podStartE2EDuration="53.307040017s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.410221965 +0000 UTC m=+179.054858735" lastFinishedPulling="2026-02-02 14:37:07.981031517 +0000 UTC m=+229.625668287" observedRunningTime="2026-02-02 14:37:08.305419217 +0000 UTC m=+229.950055987" watchObservedRunningTime="2026-02-02 14:37:08.307040017 +0000 UTC m=+229.951676787"
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.596766 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.687012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.687066 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.737346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.743548 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.801230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"2c21252d-a76f-437f-8611-f42993137df3\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") "
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.801685 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"2c21252d-a76f-437f-8611-f42993137df3\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") "
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.801769 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"2c21252d-a76f-437f-8611-f42993137df3\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") "
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.802216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities" (OuterVolumeSpecName: "utilities") pod "2c21252d-a76f-437f-8611-f42993137df3" (UID: "2c21252d-a76f-437f-8611-f42993137df3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.802452 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.808957 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p" (OuterVolumeSpecName: "kube-api-access-x9j9p") pod "2c21252d-a76f-437f-8611-f42993137df3" (UID: "2c21252d-a76f-437f-8611-f42993137df3"). InnerVolumeSpecName "kube-api-access-x9j9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.858353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c21252d-a76f-437f-8611-f42993137df3" (UID: "2c21252d-a76f-437f-8611-f42993137df3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.904760 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.904815 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.079236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.079374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.123830 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272720 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c21252d-a76f-437f-8611-f42993137df3" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" exitCode=0
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272775 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"}
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272851 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272879 4869 scope.go:117] "RemoveContainer" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272858 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"ab3d419e69ab359ef2eb23e842d3d4f04eb05500497bb827ac7bf3115cbf4af4"}
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.294826 4869 scope.go:117] "RemoveContainer" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.314978 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"]
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.318157 4869 scope.go:117] "RemoveContainer" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.318212 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"]
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.323180 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.339701 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7wp9"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.346056 4869 scope.go:117] "RemoveContainer" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"
Feb 02 14:37:09 crc kubenswrapper[4869]: E0202 14:37:09.346832 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82\": container with ID starting with f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82 not found: ID does not exist" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.346893 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"} err="failed to get container status \"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82\": rpc error: code = NotFound desc = could not find container \"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82\": container with ID starting with f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82 not found: ID does not exist"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.346949 4869 scope.go:117] "RemoveContainer" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"
Feb 02 14:37:09 crc kubenswrapper[4869]: E0202 14:37:09.347765 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3\": container with ID starting with 1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3 not found: ID does not exist" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.347883 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"} err="failed to get container status \"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3\": rpc error: code = NotFound desc = could not find container \"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3\": container with ID starting with 1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3 not found: ID does not exist"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.348041 4869 scope.go:117] "RemoveContainer" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"
Feb 02 14:37:09 crc kubenswrapper[4869]: E0202 14:37:09.348702 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2\": container with ID starting with f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2 not found: ID does not exist" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.348732 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"} err="failed to get container status \"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2\": rpc error: code = NotFound desc = could not find container \"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2\": container with ID starting with f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2 not found: ID does not exist"
Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.471859 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c21252d-a76f-437f-8611-f42993137df3" path="/var/lib/kubelet/pods/2c21252d-a76f-437f-8611-f42993137df3/volumes"
Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.657404 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"]
Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.658316 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-585556997c-k595t" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" containerID="cri-o://88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766" gracePeriod=30
Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.693083 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"]
Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.693457 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" containerID="cri-o://4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035" gracePeriod=30
Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.723207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"]
Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.724018 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9kt6r" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" containerID="cri-o://c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222" gracePeriod=2
Feb 02 14:37:13 crc kubenswrapper[4869]: I0202 14:37:13.299686 4869 generic.go:334] "Generic (PLEG): container finished" podID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerID="4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035" exitCode=0
Feb 02 14:37:13 crc kubenswrapper[4869]: I0202 14:37:13.299776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerDied","Data":"4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035"}
Feb 02 14:37:13 crc kubenswrapper[4869]: I0202 14:37:13.303959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerStarted","Data":"1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712"}
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.230662 4869 patch_prober.go:28] interesting pod/route-controller-manager-c89fbc794-wrbkk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.230736 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.315484 4869 generic.go:334] "Generic (PLEG): container finished" podID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerID="c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222" exitCode=0
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.315580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222"}
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.317826 4869 generic.go:334] "Generic (PLEG): container finished" podID="d15d0185-0712-4813-8818-f8ff704f3263" containerID="88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766" exitCode=0
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.318765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerDied","Data":"88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766"}
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.346390 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wrnr2" podStartSLOduration=3.525181413 podStartE2EDuration="57.346363387s" podCreationTimestamp="2026-02-02 14:36:17 +0000 UTC" firstStartedPulling="2026-02-02 14:36:18.533267613 +0000 UTC m=+180.177904383" lastFinishedPulling="2026-02-02 14:37:12.354449587 +0000 UTC m=+233.999086357" observedRunningTime="2026-02-02 14:37:14.340428978 +0000 UTC m=+235.985065748" watchObservedRunningTime="2026-02-02 14:37:14.346363387 +0000 UTC m=+235.991000157"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.561618 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599533 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"]
Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.599826 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-content"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599847 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-content"
Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.599862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-utilities"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599871 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-utilities"
Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.599890 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599901 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server"
Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.600270 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.600286 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.600426 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.600449 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.602140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.619933 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"]
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") "
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") "
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") "
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") "
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700781 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700928 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.701436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.701793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config" (OuterVolumeSpecName: "config") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.706885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn" (OuterVolumeSpecName: "kube-api-access-6cwmn") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "kube-api-access-6cwmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.714805 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803263 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803276 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803286 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803296 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.804587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.804590 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.812014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.824404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.934066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.327376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerDied","Data":"455b2abd7e5482aef3332c14262e762b84b5a7304c0eb824ce7c84e17fb72fbf"}
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.327438 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.327470 4869 scope.go:117] "RemoveContainer" containerID="4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.369796 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"]
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.376215 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"]
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.470765 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" path="/var/lib/kubelet/pods/d8c59892-6f39-4bd6-91ba-dc718a31d120/volumes"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.696732 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.696877 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.756229 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h9pgx"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.884149 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r"
Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.887531 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t"
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.021861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"02e119c7-dd08-471f-9800-5bda7b22a6d6\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"02e119c7-dd08-471f-9800-5bda7b22a6d6\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022312 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"02e119c7-dd08-471f-9800-5bda7b22a6d6\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") "
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.023011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca" (OuterVolumeSpecName: "client-ca") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.024832 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities" (OuterVolumeSpecName: "utilities") pod "02e119c7-dd08-471f-9800-5bda7b22a6d6" (UID: "02e119c7-dd08-471f-9800-5bda7b22a6d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.025563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config" (OuterVolumeSpecName: "config") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.026874 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.028704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj" (OuterVolumeSpecName: "kube-api-access-ftcdj") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "kube-api-access-ftcdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.028885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd" (OuterVolumeSpecName: "kube-api-access-cqjnd") pod "02e119c7-dd08-471f-9800-5bda7b22a6d6" (UID: "02e119c7-dd08-471f-9800-5bda7b22a6d6"). InnerVolumeSpecName "kube-api-access-cqjnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.031311 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123441 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123496 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123507 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123516 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123527 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123539 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123551 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.161579 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02e119c7-dd08-471f-9800-5bda7b22a6d6" (UID: "02e119c7-dd08-471f-9800-5bda7b22a6d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.225290 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.350117 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.350111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883"} Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.350311 4869 scope.go:117] "RemoveContainer" containerID="c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.353055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerDied","Data":"81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4"} Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.353061 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.416793 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.420790 4869 scope.go:117] "RemoveContainer" containerID="fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.437810 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.445894 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.448711 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.452280 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.486549 4869 scope.go:117] "RemoveContainer" containerID="5761dc2d2fafda3cf6b457c2de25d204c006ac8d85953364b9966521a437f222" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.504727 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.522957 4869 scope.go:117] "RemoveContainer" containerID="88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766" Feb 02 14:37:16 crc kubenswrapper[4869]: W0202 14:37:16.534561 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86bc8607_01df_4cb4_b6bb_cc2e9d5e9c21.slice/crio-cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7 WatchSource:0}: Error finding container cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7: Status 404 returned error can't find the container with id cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7 Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.816687 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 
14:37:16.817525 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817545 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 14:37:16.817561 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817569 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 14:37:16.817584 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-content" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-content" Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 14:37:16.817611 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-utilities" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817622 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-utilities" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817737 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817755 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.818289 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.820752 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.822989 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.823112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.823499 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.823651 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.865940 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.869611 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.870958 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968214 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jc2h\" (UniqueName: 
\"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069196 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069256 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.070767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.070816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.071468 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 
02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.085977 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.093846 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.313292 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.365586 4869 generic.go:334] "Generic (PLEG): container finished" podID="e56fa221-6e79-4c96-be0a-17db4803a127" containerID="1d5262628061708d6b461198d2d084d86b80216bf8b77ec9e9e6c482080d5b5e" exitCode=0 Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.365748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"1d5262628061708d6b461198d2d084d86b80216bf8b77ec9e9e6c482080d5b5e"} Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.374349 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerStarted","Data":"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae"} Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.374408 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerStarted","Data":"cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7"} Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.375597 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.388979 4869 generic.go:334] "Generic (PLEG): container finished" podID="20990512-5147-4de8-95e0-f40e2156f395" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126" exitCode=0 Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.389057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"} Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.401503 4869 generic.go:334] "Generic (PLEG): container finished" podID="442e63b3-7f70-4524-b229-aedfb054f395" containerID="435266a1fb45df9d425b2515a2f4a59487d90de763976fcfaaabab9e29fcb4cb" exitCode=0 Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.401583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" 
event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"435266a1fb45df9d425b2515a2f4a59487d90de763976fcfaaabab9e29fcb4cb"} Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.407788 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" podStartSLOduration=5.4077696060000005 podStartE2EDuration="5.407769606s" podCreationTimestamp="2026-02-02 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:17.405690053 +0000 UTC m=+239.050326823" watchObservedRunningTime="2026-02-02 14:37:17.407769606 +0000 UTC m=+239.052406376" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.470995 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" path="/var/lib/kubelet/pods/02e119c7-dd08-471f-9800-5bda7b22a6d6/volumes" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.472286 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15d0185-0712-4813-8818-f8ff704f3263" path="/var/lib/kubelet/pods/d15d0185-0712-4813-8818-f8ff704f3263/volumes" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.606039 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.661962 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.662034 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.716994 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.761249 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:17 crc kubenswrapper[4869]: W0202 14:37:17.772411 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b312c5_c580_4ea2_83d7_5217f24da91f.slice/crio-98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3 WatchSource:0}: Error finding container 98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3: Status 404 returned error can't find the container with id 98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3 Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.412012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerStarted","Data":"5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f"} Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.415789 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerStarted","Data":"797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b"} Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.417471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerStarted","Data":"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d"} Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.417522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerStarted","Data":"98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3"} Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.418922 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.426073 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.434830 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h4pkg" podStartSLOduration=2.973934785 podStartE2EDuration="1m1.434807257s" podCreationTimestamp="2026-02-02 14:36:17 +0000 UTC" firstStartedPulling="2026-02-02 14:36:19.624107056 +0000 UTC m=+181.268743826" lastFinishedPulling="2026-02-02 14:37:18.084979518 +0000 UTC m=+239.729616298" observedRunningTime="2026-02-02 14:37:18.430307884 +0000 UTC m=+240.074944654" watchObservedRunningTime="2026-02-02 14:37:18.434807257 +0000 UTC m=+240.079444027" Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.460807 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cm44g" podStartSLOduration=2.665096812 podStartE2EDuration="1m3.4607766s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.402089094 +0000 UTC m=+179.046725864" lastFinishedPulling="2026-02-02 14:37:18.197768882 +0000 UTC m=+239.842405652" observedRunningTime="2026-02-02 14:37:18.458283647 +0000 UTC m=+240.102920427" watchObservedRunningTime="2026-02-02 14:37:18.4607766 +0000 UTC m=+240.105413370" Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.472666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.488301 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" podStartSLOduration=6.488277341 podStartE2EDuration="6.488277341s" podCreationTimestamp="2026-02-02 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:18.483591413 +0000 UTC m=+240.128228183" watchObservedRunningTime="2026-02-02 14:37:18.488277341 +0000 UTC m=+240.132914111" Feb 02 14:37:20 crc kubenswrapper[4869]: I0202 14:37:20.430812 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerStarted","Data":"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"} Feb 02 14:37:20 crc kubenswrapper[4869]: I0202 14:37:20.448975 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6crm" 
podStartSLOduration=3.782908403 podStartE2EDuration="1m5.448954017s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.424705252 +0000 UTC m=+179.069342022" lastFinishedPulling="2026-02-02 14:37:19.090750866 +0000 UTC m=+240.735387636" observedRunningTime="2026-02-02 14:37:20.447455259 +0000 UTC m=+242.092092029" watchObservedRunningTime="2026-02-02 14:37:20.448954017 +0000 UTC m=+242.093590787" Feb 02 14:37:21 crc kubenswrapper[4869]: I0202 14:37:21.711043 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" containerID="cri-o://4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4" gracePeriod=15 Feb 02 14:37:22 crc kubenswrapper[4869]: I0202 14:37:22.443834 4869 generic.go:334] "Generic (PLEG): container finished" podID="992c2b96-5783-4865-a47d-167caf91e241" containerID="4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4" exitCode=0 Feb 02 14:37:22 crc kubenswrapper[4869]: I0202 14:37:22.443894 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerDied","Data":"4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4"} Feb 02 14:37:22 crc kubenswrapper[4869]: I0202 14:37:22.906717 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071816 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071836 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071872 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc 
kubenswrapper[4869]: I0202 14:37:23.071923 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071953 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071977 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072113 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.073744 4869 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.073947 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.076443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.076841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.079140 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.082355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.083125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.083396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.087596 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6" (OuterVolumeSpecName: "kube-api-access-dfqt6") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "kube-api-access-dfqt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.088364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.089980 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.090441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.091355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.099351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.174805 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175373 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175395 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175410 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175422 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175433 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175447 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175461 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175472 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175491 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175501 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175514 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") on node 
\"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175524 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175535 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.451127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerDied","Data":"92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365"} Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.451193 4869 scope.go:117] "RemoveContainer" containerID="4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.451210 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.486620 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.494453 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.445889 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.446367 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.470777 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992c2b96-5783-4865-a47d-167caf91e241" path="/var/lib/kubelet/pods/992c2b96-5783-4865-a47d-167caf91e241/volumes" Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.493139 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.544699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.092070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.092144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.140210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.515665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 
14:37:28.077753 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.078164 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.117921 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.520636 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.521278 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cm44g" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server" containerID="cri-o://797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b" gracePeriod=2 Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.532038 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.118899 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.514546 4869 generic.go:334] "Generic (PLEG): container finished" podID="e56fa221-6e79-4c96-be0a-17db4803a127" containerID="797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b" exitCode=0 Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.514688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b"} Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.566275 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.683129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"e56fa221-6e79-4c96-be0a-17db4803a127\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.683277 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"e56fa221-6e79-4c96-be0a-17db4803a127\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.683339 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"e56fa221-6e79-4c96-be0a-17db4803a127\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.687494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities" (OuterVolumeSpecName: "utilities") pod "e56fa221-6e79-4c96-be0a-17db4803a127" (UID: "e56fa221-6e79-4c96-be0a-17db4803a127"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.693420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744" (OuterVolumeSpecName: "kube-api-access-9l744") pod "e56fa221-6e79-4c96-be0a-17db4803a127" (UID: "e56fa221-6e79-4c96-be0a-17db4803a127"). InnerVolumeSpecName "kube-api-access-9l744". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.741013 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e56fa221-6e79-4c96-be0a-17db4803a127" (UID: "e56fa221-6e79-4c96-be0a-17db4803a127"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.785121 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.785176 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.785195 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"3fdc2755e50c40ab06f7338836dcc4d68f5937d9bf9ebd941d8d98f6a64dcd17"} Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526660 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526705 4869 scope.go:117] "RemoveContainer" containerID="797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b" Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526793 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h4pkg" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server" containerID="cri-o://5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f" gracePeriod=2 Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.557064 4869 scope.go:117] "RemoveContainer" containerID="1d5262628061708d6b461198d2d084d86b80216bf8b77ec9e9e6c482080d5b5e" Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.572450 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.579240 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.588986 4869 scope.go:117] "RemoveContainer" containerID="b2450dd93a7c78de896bbf627e97911c1993d1380dd59859505aa8d294fc3f44" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.469948 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" path="/var/lib/kubelet/pods/e56fa221-6e79-4c96-be0a-17db4803a127/volumes" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.537486 4869 generic.go:334] "Generic (PLEG): container finished" podID="442e63b3-7f70-4524-b229-aedfb054f395" containerID="5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f" exitCode=0 Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.537558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f"} Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.620140 4869 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"442e63b3-7f70-4524-b229-aedfb054f395\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811532 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"442e63b3-7f70-4524-b229-aedfb054f395\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"442e63b3-7f70-4524-b229-aedfb054f395\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities" (OuterVolumeSpecName: "utilities") pod "442e63b3-7f70-4524-b229-aedfb054f395" (UID: "442e63b3-7f70-4524-b229-aedfb054f395"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811999 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.817385 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5" (OuterVolumeSpecName: "kube-api-access-vlvm5") pod "442e63b3-7f70-4524-b229-aedfb054f395" (UID: "442e63b3-7f70-4524-b229-aedfb054f395"). InnerVolumeSpecName "kube-api-access-vlvm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.837819 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "442e63b3-7f70-4524-b229-aedfb054f395" (UID: "442e63b3-7f70-4524-b229-aedfb054f395"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.913849 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.913897 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.546647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2"} Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.546689 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.546717 4869 scope.go:117] "RemoveContainer" containerID="5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.578363 4869 scope.go:117] "RemoveContainer" containerID="435266a1fb45df9d425b2515a2f4a59487d90de763976fcfaaabab9e29fcb4cb" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.595756 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.598846 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.615433 4869 scope.go:117] "RemoveContainer" containerID="9fde05ff8b3ab7b33bf7fd64de1786d6d6c5b221f2074b9b8d881ce96c0861b1" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.668207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.668507 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" containerID="cri-o://b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" gracePeriod=30 Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.765658 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.766013 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" containerID="cri-o://67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" gracePeriod=30 Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827586 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-69btm"] Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827870 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442e63b3-7f70-4524-b229-aedfb054f395" 
containerName="registry-server" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827884 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server" Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827895 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827901 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server" Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827932 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-content" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827938 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-content" Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827952 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-utilities" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-utilities" Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827966 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-content" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827972 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-content" Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827981 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-utilities" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827987 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-utilities" Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827997 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828003 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828092 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828105 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828121 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828592 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.833535 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.833621 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836143 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836300 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836351 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836475 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836533 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836585 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836633 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836819 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.837155 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.843264 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.847073 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-69btm"] Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.849853 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.859353 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f717d6c0-e841-450a-90b8-e651ed89f315-audit-dir\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 
14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929235 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929324 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-session\") 
pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-audit-policies\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/f717d6c0-e841-450a-90b8-e651ed89f315-kube-api-access-9gjzm\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.030715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031294 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-audit-policies\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031410 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/f717d6c0-e841-450a-90b8-e651ed89f315-kube-api-access-9gjzm\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031442 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f717d6c0-e841-450a-90b8-e651ed89f315-audit-dir\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031495 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031531 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.032897 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-audit-policies\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.034207 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.034823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.035049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f717d6c0-e841-450a-90b8-e651ed89f315-audit-dir\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.037660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.038944 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.039038 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.040805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.040948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.041385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.042094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.043853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.054066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-69btm\" 
(UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.057668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/f717d6c0-e841-450a-90b8-e651ed89f315-kube-api-access-9gjzm\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.208145 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.215290 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.300249 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334243 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334377 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334481 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.335441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca" (OuterVolumeSpecName: "client-ca") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.335767 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config" (OuterVolumeSpecName: "config") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.338208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.338248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679" (OuterVolumeSpecName: "kube-api-access-95679") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "kube-api-access-95679". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.435862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436137 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436172 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436208 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436592 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436613 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436626 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 
14:37:33.436638 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.437621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.437646 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.437864 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config" (OuterVolumeSpecName: "config") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.441046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.441095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h" (OuterVolumeSpecName: "kube-api-access-7jc2h") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "kube-api-access-7jc2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.477703 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442e63b3-7f70-4524-b229-aedfb054f395" path="/var/lib/kubelet/pods/442e63b3-7f70-4524-b229-aedfb054f395/volumes" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538856 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538948 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538967 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538978 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538991 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562719 4869 generic.go:334] "Generic (PLEG): container finished" podID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" exitCode=0 Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerDied","Data":"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerDied","Data":"cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562840 4869 scope.go:117] "RemoveContainer" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562959 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.566819 4869 generic.go:334] "Generic (PLEG): container finished" podID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" exitCode=0 Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.566894 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.566889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerDied","Data":"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.567224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerDied","Data":"98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.584075 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.596555 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.602888 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.606943 4869 scope.go:117] "RemoveContainer" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.607582 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae\": container with ID starting with 67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae not found: ID does not exist" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.607656 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae"} err="failed to get container status \"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae\": rpc error: code = NotFound desc = could not find container \"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae\": container with ID starting with 67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae not found: ID does not exist" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.607689 4869 scope.go:117] "RemoveContainer" containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.608321 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.639234 4869 scope.go:117] "RemoveContainer" containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.640019 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d\": container with ID starting with b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d not found: ID does not exist" 
containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.640055 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d"} err="failed to get container status \"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d\": rpc error: code = NotFound desc = could not find container \"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d\": container with ID starting with b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d not found: ID does not exist" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.671809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-69btm"] Feb 02 14:37:33 crc kubenswrapper[4869]: W0202 14:37:33.676413 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf717d6c0_e841_450a_90b8_e651ed89f315.slice/crio-4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4 WatchSource:0}: Error finding container 4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4: Status 404 returned error can't find the container with id 4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4 Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830040 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv"] Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.830474 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830521 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.830541 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830549 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830718 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.831425 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.836442 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.836444 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.838103 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.838896 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.838923 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841155 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841519 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841584 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841859 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842055 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842202 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842276 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842293 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.844008 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.845681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b7910bb-92fa-4254-9635-b376bd2e3b5b-serving-cert\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.845729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zv56\" (UniqueName: \"kubernetes.io/projected/9b7910bb-92fa-4254-9635-b376bd2e3b5b-kube-api-access-7zv56\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.848845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-config\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.849025 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-proxy-ca-bundles\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.849147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6265c823-67e0-40d0-9a85-d57db97e2513-serving-cert\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.849418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xrb\" (UniqueName: \"kubernetes.io/projected/6265c823-67e0-40d0-9a85-d57db97e2513-kube-api-access-t2xrb\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-client-ca\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-config\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-client-ca\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.867040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.889212 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.893871 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.953454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-client-ca\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.953837 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b7910bb-92fa-4254-9635-b376bd2e3b5b-serving-cert\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.953971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zv56\" (UniqueName: \"kubernetes.io/projected/9b7910bb-92fa-4254-9635-b376bd2e3b5b-kube-api-access-7zv56\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-config\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-proxy-ca-bundles\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6265c823-67e0-40d0-9a85-d57db97e2513-serving-cert\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954569 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xrb\" (UniqueName: \"kubernetes.io/projected/6265c823-67e0-40d0-9a85-d57db97e2513-kube-api-access-t2xrb\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-client-ca\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954797 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-config\") pod 
\"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-client-ca\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.955817 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-config\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.956155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-client-ca\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.956258 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-proxy-ca-bundles\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.956996 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-config\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.966213 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b7910bb-92fa-4254-9635-b376bd2e3b5b-serving-cert\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.973710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6265c823-67e0-40d0-9a85-d57db97e2513-serving-cert\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.978747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xrb\" (UniqueName: \"kubernetes.io/projected/6265c823-67e0-40d0-9a85-d57db97e2513-kube-api-access-t2xrb\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 
14:37:33.979930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zv56\" (UniqueName: \"kubernetes.io/projected/9b7910bb-92fa-4254-9635-b376bd2e3b5b-kube-api-access-7zv56\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.171171 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.196831 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:34 crc kubenswrapper[4869]: W0202 14:37:34.505315 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6265c823_67e0_40d0_9a85_d57db97e2513.slice/crio-ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2 WatchSource:0}: Error finding container ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2: Status 404 returned error can't find the container with id ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2 Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.506217 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn"] Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.580684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" event={"ID":"f717d6c0-e841-450a-90b8-e651ed89f315","Type":"ContainerStarted","Data":"004aa9e20d90c52c532959af386df200cddc9e51d9026630027395f5501fbe58"} Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.580735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" event={"ID":"f717d6c0-e841-450a-90b8-e651ed89f315","Type":"ContainerStarted","Data":"4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4"} Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.582160 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.587735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" event={"ID":"6265c823-67e0-40d0-9a85-d57db97e2513","Type":"ContainerStarted","Data":"ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2"} Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.587870 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.643778 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" podStartSLOduration=38.643754039 podStartE2EDuration="38.643754039s" podCreationTimestamp="2026-02-02 14:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:34.618340301 +0000 UTC m=+256.262977071" watchObservedRunningTime="2026-02-02 14:37:34.643754039 +0000 UTC 
m=+256.288390809" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.654649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv"] Feb 02 14:37:34 crc kubenswrapper[4869]: W0202 14:37:34.665336 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7910bb_92fa_4254_9635_b376bd2e3b5b.slice/crio-ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b WatchSource:0}: Error finding container ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b: Status 404 returned error can't find the container with id ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.473464 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" path="/var/lib/kubelet/pods/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21/volumes" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.474843 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" path="/var/lib/kubelet/pods/f0b312c5-c580-4ea2-83d7-5217f24da91f/volumes" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.594850 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" event={"ID":"6265c823-67e0-40d0-9a85-d57db97e2513","Type":"ContainerStarted","Data":"256411f04db530b62c380608d97946b9b623805f96c4af44692a56c21b7ceb7d"} Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.596730 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.598789 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" event={"ID":"9b7910bb-92fa-4254-9635-b376bd2e3b5b","Type":"ContainerStarted","Data":"904a9654994a6deea97a335762a1e162586410d8a11a6bee3309d47260b5ad34"} Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.598821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" event={"ID":"9b7910bb-92fa-4254-9635-b376bd2e3b5b","Type":"ContainerStarted","Data":"ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b"} Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.599192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.604473 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.605299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.643228 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" podStartSLOduration=3.643205648 podStartE2EDuration="3.643205648s" podCreationTimestamp="2026-02-02 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:35.619783439 +0000 UTC m=+257.264420219" watchObservedRunningTime="2026-02-02 14:37:35.643205648 +0000 UTC m=+257.287842418" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.644729 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" podStartSLOduration=3.644723656 podStartE2EDuration="3.644723656s" podCreationTimestamp="2026-02-02 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:35.64252833 +0000 UTC m=+257.287165100" watchObservedRunningTime="2026-02-02 14:37:35.644723656 +0000 UTC m=+257.289360426" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.190162 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.191692 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193002 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193314 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193361 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193444 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193518 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193585 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.194994 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195460 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195480 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195509 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195516 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195525 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195532 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195548 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195555 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195567 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195574 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195587 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197040 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197066 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197079 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197087 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197100 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197108 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197120 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.197881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197971 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.242308 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288504 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288537 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288565 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390841 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390929 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391156 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391212 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391261 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.543399 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:42 crc kubenswrapper[4869]: W0202 14:37:42.292399 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb WatchSource:0}: Error finding container e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb: Status 404 returned error can't find the container with id e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb Feb 02 14:37:42 crc kubenswrapper[4869]: E0202 14:37:42.296533 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.82:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189074c97d476a90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,LastTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.644143 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.646468 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647158 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5" exitCode=0 Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647188 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213" exitCode=0 Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647195 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f" exitCode=0 Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647203 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649" exitCode=2 Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 
14:37:42.647266 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.649455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2"} Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.649495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb"} Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.651121 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.652272 4869 generic.go:334] "Generic (PLEG): container finished" podID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerID="4e29b74a75f39484800450916e4d1c5aab402b78c65dc22472418020d76f3456" exitCode=0 Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.652314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerDied","Data":"4e29b74a75f39484800450916e4d1c5aab402b78c65dc22472418020d76f3456"} Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.653298 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.653545 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:43 crc kubenswrapper[4869]: I0202 14:37:43.663404 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.045709 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.046761 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.047168 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.135940 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.135990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" (UID: "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock" (OuterVolumeSpecName: "var-lock") pod "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" (UID: "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136505 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136520 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.145964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" (UID: "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.237867 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.585161 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.586879 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.587590 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.588120 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.588425 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.642882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643136 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643170 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643278 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643502 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643519 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643528 4869 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.676130 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.677146 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5" exitCode=0 Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.677223 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.677232 4869 scope.go:117] "RemoveContainer" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.679106 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerDied","Data":"7adfeb67f0661759b89e7e0b4ac36ee5625d863782a8812d5fd336834d3294f2"} Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.679139 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adfeb67f0661759b89e7e0b4ac36ee5625d863782a8812d5fd336834d3294f2" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.679170 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.699550 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.700164 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.700513 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.700842 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.701054 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.701210 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.723131 4869 scope.go:117] "RemoveContainer" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213" Feb 02 14:37:44 crc 
kubenswrapper[4869]: I0202 14:37:44.743617 4869 scope.go:117] "RemoveContainer" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.764130 4869 scope.go:117] "RemoveContainer" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.789283 4869 scope.go:117] "RemoveContainer" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.806699 4869 scope.go:117] "RemoveContainer" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.826310 4869 scope.go:117] "RemoveContainer" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.826846 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\": container with ID starting with 1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5 not found: ID does not exist" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.826899 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"} err="failed to get container status \"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\": rpc error: code = NotFound desc = could not find container \"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\": container with ID starting with 1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.826978 4869 scope.go:117] "RemoveContainer" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.828719 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\": container with ID starting with bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213 not found: ID does not exist" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.828756 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"} err="failed to get container status \"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\": rpc error: code = NotFound desc = could not find container \"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\": container with ID starting with bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.828789 4869 scope.go:117] "RemoveContainer" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.829182 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\": container with ID starting with f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f not found: ID does not exist" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829208 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"} err="failed to get container status \"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\": rpc error: code = NotFound desc = could not find container \"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\": container with ID starting with f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829225 4869 scope.go:117] "RemoveContainer" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.829508 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\": container with ID starting with 096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649 not found: ID does not exist" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829526 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"} err="failed to get container status \"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\": rpc error: code = NotFound desc = could not find container \"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\": container with ID starting with 096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829540 4869 scope.go:117] "RemoveContainer" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.830013 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\": container with ID starting with 6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5 not found: ID does not exist" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.830044 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"} err="failed to get container status \"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\": rpc error: code = NotFound desc = could not find container \"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\": container with ID starting with 6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.830061 4869 scope.go:117] "RemoveContainer" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.830358 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\": container with ID starting with 1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37 not found: ID does not exist" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.830379 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"} err="failed to get container status \"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\": rpc error: code = NotFound desc = could not find container \"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\": container with ID starting with 1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37 not found: ID does not exist"
Feb 02 14:37:45 crc kubenswrapper[4869]: I0202 14:37:45.468936 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.491635 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.492191 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.492800 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.493224 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.493612 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: I0202 14:37:47.493649 4869 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.493893 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="200ms"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.695404 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="400ms"
Feb 02 14:37:48 crc kubenswrapper[4869]: E0202 14:37:48.096443 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="800ms"
Feb 02 14:37:48 crc kubenswrapper[4869]: E0202 14:37:48.263067 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.82:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189074c97d476a90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,LastTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 14:37:48 crc kubenswrapper[4869]: E0202 14:37:48.897986 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="1.6s"
Feb 02 14:37:49 crc kubenswrapper[4869]: I0202 14:37:49.465454 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:49 crc kubenswrapper[4869]: I0202 14:37:49.466102 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:50 crc kubenswrapper[4869]: E0202 14:37:50.499452 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="3.2s"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.558412 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.558996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.559072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.559202 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:37:52 crc kubenswrapper[4869]: W0202 14:37:52.559687 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:52 crc kubenswrapper[4869]: E0202 14:37:52.559785 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:52 crc kubenswrapper[4869]: W0202 14:37:52.559986 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:52 crc kubenswrapper[4869]: E0202 14:37:52.560129 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:52 crc kubenswrapper[4869]: W0202 14:37:52.559687 4869 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:52 crc kubenswrapper[4869]: E0202 14:37:52.560231 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559776 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559853 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559863 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559998 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:55.559964755 +0000 UTC m=+397.204601525 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.560197 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:55.56017489 +0000 UTC m=+397.204811660 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559996 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: W0202 14:37:53.561050 4869 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.561139 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.700238 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="6.4s"
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561138 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561275 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561186 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561395 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561396 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:56.56136957 +0000 UTC m=+398.206006340 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561517 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:56.561486642 +0000 UTC m=+398.206123412 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: W0202 14:37:54.619435 4869 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.619555 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.779785 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.779847 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53" exitCode=1
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.779886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53"}
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.781350 4869 scope.go:117] "RemoveContainer" containerID="24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.781974 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.782508 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.782748 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:54 crc kubenswrapper[4869]: W0202 14:37:54.976256 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.976704 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:55 crc kubenswrapper[4869]: W0202 14:37:55.241231 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:55 crc kubenswrapper[4869]: E0202 14:37:55.241345 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.794042 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.794148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48"}
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.795446 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.796296 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.796639 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:55 crc kubenswrapper[4869]: W0202 14:37:55.804722 4869 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:55 crc kubenswrapper[4869]: E0202 14:37:55.804813 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.128120 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.180465 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.180857 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.180984 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.462046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.464930 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.465572 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.465895 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.479655 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.479726 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:56 crc kubenswrapper[4869]: E0202 14:37:56.480380 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.481453 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.803198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6761a4a5165ae6cb7a772c44b1665b6b7ebe7de99f1094f5adde7248288ac27f"}
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813048 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ed0d9d90c2e5bb55df0d6a404530efce84c940be6299ebe61ba479a34e5bf850" exitCode=0
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ed0d9d90c2e5bb55df0d6a404530efce84c940be6299ebe61ba479a34e5bf850"}
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813693 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813716 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:57 crc kubenswrapper[4869]: E0202 14:37:57.814344 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.814349 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.814953 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.815294 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:58 crc kubenswrapper[4869]: I0202 14:37:58.821721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cb5c56a5e047124905812b7a14b8a34862cb45bda2a033dcd929ee28793d1f98"}
Feb 02 14:37:58 crc kubenswrapper[4869]: I0202 14:37:58.822128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79ea061f5625451f1692831d2c2774a2c11d2f8e0feb297db2721ec6e1a18cb1"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.830762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3ed615df896343e6330ac413783bf3ec5e1f88d8297b8815bd0be595dc066dc4"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0ef39b07741ca7c20804fdc8fe96e0862226159e2aaebe3d16ce796c258f799c"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"244b6f19fb1e568ae7381f1ff6c9edef2df1bb485b3508f8ae05d93afe8ad476"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831180 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831363 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.317116 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.482291 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.482462 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.490763 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.847723 4869 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.848947 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.868803 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.869154 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.873063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.875514 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c96bdd8a-fdad-42aa-baba-291b9cd0c8d3"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.924449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.976948 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 14:38:05 crc kubenswrapper[4869]: I0202 14:38:05.875883 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:05 crc kubenswrapper[4869]: I0202 14:38:05.875939 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:06 crc kubenswrapper[4869]: I0202 14:38:06.181600 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 14:38:06 crc kubenswrapper[4869]: I0202 14:38:06.181684 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 14:38:08 crc kubenswrapper[4869]: E0202 14:38:08.493322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:38:09 crc kubenswrapper[4869]: E0202 14:38:09.482873 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:38:09 crc kubenswrapper[4869]: I0202 14:38:09.486356 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c96bdd8a-fdad-42aa-baba-291b9cd0c8d3"
Feb 02 14:38:09 crc kubenswrapper[4869]: E0202 14:38:09.493794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:38:14 crc kubenswrapper[4869]: I0202 14:38:14.741104 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 14:38:14 crc kubenswrapper[4869]: I0202 14:38:14.802332 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 14:38:14 crc kubenswrapper[4869]: I0202 14:38:14.877719 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.004017 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.326348 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.476838 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.643613 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.799412 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.852470 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.890077 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.954446 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.112188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.156963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.181168 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.181546 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.181770 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.183755 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.183936 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48" gracePeriod=30
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.370171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.421410 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.448833 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.674337 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.711674 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.837296 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.976630 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.142640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.166250 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.253241 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.451103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.523501 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.620217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.695289 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.720329 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.796383 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.880956 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.919640 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.992287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.013182 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.082528 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.088672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.097793 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.116147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.126860 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.142943 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.285316 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.334056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.342964 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.369764 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.453296 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.468938 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.498659 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.585447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.637401 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.641951 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.836346 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.912576 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.915836 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.071103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.139399 4869 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.209124 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.374610 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.377976 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.377952381 podStartE2EDuration="38.377952381s" podCreationTimestamp="2026-02-02 14:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:38:04.595950288 +0000 UTC m=+286.240587058" watchObservedRunningTime="2026-02-02 14:38:19.377952381 +0000 UTC m=+301.022589151"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.380179 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.380252 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.386531 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.403930 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.403892467 podStartE2EDuration="15.403892467s" podCreationTimestamp="2026-02-02 14:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:38:19.401196901 +0000 UTC m=+301.045833691" watchObservedRunningTime="2026-02-02 14:38:19.403892467 +0000 UTC m=+301.048529237"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.410973 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.412035 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.414937 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.590071 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.602192 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.622031 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.688190 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.724139 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.798668 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.839601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.888646 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.899800 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.982267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.031643 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.067728 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.104936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.252672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.314232 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.386165 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.455800 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.461798 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.567635 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.592323 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.648342 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.690090 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.702275 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.722788 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.781972 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.788966 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.846025 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.848800 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.882699 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.022472 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.031677 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.048244 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.101171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.102108 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.258706 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.307060 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.335641 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.410204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.430789 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.461857 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.480109 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.524170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.537649 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.561786 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.599980 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.663365 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.787246 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.787290 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.813391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.914030 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.945587 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.967054 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.006231 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.042455 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.120513 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.304042 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.307517 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.323761 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.357116 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.388638 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.448327 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.480401 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.481075 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.610457 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.611705 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.669118 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.731995 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.778976 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.793842 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.805642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.806379 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.839846 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.865192 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.894893 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.919262 4869
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.926997 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.012529 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.018432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.095755 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.286601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.335509 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.353254 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.461988 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.485261 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.687089 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.881753 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.954396 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.016572 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.196835 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.240487 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.247679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.270341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.470958 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.507000 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.607491 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.649816 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.702131 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.714317 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.818674 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.829173 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.887159 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.902132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.998373 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.029758 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.038639 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.104951 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.114202 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.153834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.370899 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.379621 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.403221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.405269 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:38:25 crc 
kubenswrapper[4869]: I0202 14:38:25.514716 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.638163 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.648075 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.730301 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.804003 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.821282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.841783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.849853 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.888575 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.960327 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.024110 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.040322 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.052956 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.076043 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.197085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.224793 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.228208 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.461672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.543039 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 
02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.573848 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.615253 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.642670 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.656245 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.036701 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.247447 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.247805 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" gracePeriod=5 Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.272178 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.401502 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.442330 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.649696 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.680147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.900899 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.942061 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.964520 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.977825 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.115806 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.291463 4869 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.362087 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.411093 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.465585 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.469020 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.678605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.684050 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.702206 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.830982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.841461 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.847497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.868293 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.876485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.923221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.953968 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.085707 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.150078 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.161220 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.161645 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.179018 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 
14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.214204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.385404 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.399480 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.432234 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.538543 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.539330 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6crm" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" containerID="cri-o://6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.545226 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.551197 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h9pgx" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" containerID="cri-o://0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.560734 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.561059 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" containerID="cri-o://86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.564654 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.565216 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wrnr2" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" containerID="cri-o://1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.580190 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.580591 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k7wp9" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" containerID="cri-o://4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.740480 
4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.764794 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.940871 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.952513 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.036614 4869 generic.go:334] "Generic (PLEG): container finished" podID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerID="86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a" exitCode=0 Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.036746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerDied","Data":"86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a"} Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.041901 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.042444 4869 generic.go:334] "Generic (PLEG): container finished" podID="7bc37994-d436-4a72-93dd-610683ab871f" containerID="1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712" exitCode=0 Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.042529 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712"} Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049200 4869 generic.go:334] "Generic (PLEG): container finished" podID="20990512-5147-4de8-95e0-f40e2156f395" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" exitCode=0 Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049258 4869 util.go:48] "No ready sandbox for pod can be found. 
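
The "Killing container with a grace period ... gracePeriod=30" lines above, followed by "container finished ... exitCode=0", show the normal termination path: the runtime delivers SIGTERM, waits up to the grace period for a clean exit, and escalates to SIGKILL only on timeout. A self-contained sketch of that contract using plain processes rather than CRI; stopWithGrace is an invented helper for illustration, not the kubelet's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mimics the runtime's stop contract: SIGTERM first,
// SIGKILL only if the process outlives the grace period.
func stopWithGrace(p *os.Process, grace time.Duration, done <-chan error) error {
	if err := p.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case err := <-done: // clean exit within the grace period; logged as exitCode=0
		return err
	case <-time.After(grace):
		return p.Kill() // escalation path after the grace period expires
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	fmt.Println(stopWithGrace(cmd.Process, 30*time.Second, done))
}
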
Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"} Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"63b62c3c310182414e285b775897296c2f662f58b08903ff210519308baba3a6"} Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049350 4869 scope.go:117] "RemoveContainer" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.053295 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerID="4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149" exitCode=0 Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.053411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149"} Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.057864 4869 generic.go:334] "Generic (PLEG): container finished" podID="35334030-48c7-4d7e-b202-75371c2c74f0" containerID="0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6" exitCode=0 Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.057952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6"} Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.088323 4869 scope.go:117] "RemoveContainer" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.100669 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.117413 4869 scope.go:117] "RemoveContainer" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.134195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"20990512-5147-4de8-95e0-f40e2156f395\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.134308 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"20990512-5147-4de8-95e0-f40e2156f395\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.134338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"20990512-5147-4de8-95e0-f40e2156f395\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.135843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities" (OuterVolumeSpecName: "utilities") pod "20990512-5147-4de8-95e0-f40e2156f395" (UID: "20990512-5147-4de8-95e0-f40e2156f395"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.140429 4869 scope.go:117] "RemoveContainer" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" Feb 02 14:38:30 crc kubenswrapper[4869]: E0202 14:38:30.140827 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c\": container with ID starting with 6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c not found: ID does not exist" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.140867 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"} err="failed to get container status \"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c\": rpc error: code = NotFound desc = could not find container \"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c\": container with ID starting with 6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c not found: ID does not exist" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.140936 4869 scope.go:117] "RemoveContainer" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126" Feb 02 14:38:30 crc kubenswrapper[4869]: E0202 14:38:30.141188 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126\": container with ID starting with 7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126 not found: ID does not exist" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141240 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"} err="failed to get container status \"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126\": rpc error: code = NotFound desc = could not find container \"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126\": container with ID starting with 7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126 not found: ID does not exist" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141261 4869 scope.go:117] "RemoveContainer" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb" Feb 02 14:38:30 crc kubenswrapper[4869]: E0202 14:38:30.141517 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb\": container with ID starting with 
2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb not found: ID does not exist" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141552 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"} err="failed to get container status \"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb\": rpc error: code = NotFound desc = could not find container \"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb\": container with ID starting with 2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb not found: ID does not exist" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141756 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd" (OuterVolumeSpecName: "kube-api-access-cd4wd") pod "20990512-5147-4de8-95e0-f40e2156f395" (UID: "20990512-5147-4de8-95e0-f40e2156f395"). InnerVolumeSpecName "kube-api-access-cd4wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.186210 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20990512-5147-4de8-95e0-f40e2156f395" (UID: "20990512-5147-4de8-95e0-f40e2156f395"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.209245 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.237155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.237196 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.237210 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.384256 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.390536 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.448701 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.529110 4869 util.go:48] "No ready sandbox for pod can be found. 
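
The E-level "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" lines above are expected noise rather than failures: the container was already removed, so the follow-up status and delete calls come back NotFound over gRPC, and the kubelet logs and moves on because deletion is idempotent. The usual shape of that tolerance, sketched against the gRPC status API (removeContainer here is a hypothetical stand-in for the CRI call):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical CRI-like call used only for illustration;
// it always reports the container as missing.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent treats NotFound as success: the desired end state
// ("container gone") already holds, so there is nothing to retry.
func removeIfPresent(id string) error {
	err := removeContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // already gone, nothing to do
	}
	return err
}

func main() {
	fmt.Println(removeIfPresent("6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"))
}
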
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.549834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.645196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"c0c32a61-d689-4c79-8348-90c8ab61b594\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.645311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"c0c32a61-d689-4c79-8348-90c8ab61b594\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.645353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"c0c32a61-d689-4c79-8348-90c8ab61b594\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.646308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities" (OuterVolumeSpecName: "utilities") pod "c0c32a61-d689-4c79-8348-90c8ab61b594" (UID: "c0c32a61-d689-4c79-8348-90c8ab61b594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.649472 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw" (OuterVolumeSpecName: "kube-api-access-4x5bw") pod "c0c32a61-d689-4c79-8348-90c8ab61b594" (UID: "c0c32a61-d689-4c79-8348-90c8ab61b594"). InnerVolumeSpecName "kube-api-access-4x5bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.650197 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.658698 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.676601 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"7bc37994-d436-4a72-93dd-610683ab871f\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"35334030-48c7-4d7e-b202-75371c2c74f0\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"35334030-48c7-4d7e-b202-75371c2c74f0\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747296 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"7bc37994-d436-4a72-93dd-610683ab871f\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"7bc37994-d436-4a72-93dd-610683ab871f\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"35334030-48c7-4d7e-b202-75371c2c74f0\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747731 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747744 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.748974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities" (OuterVolumeSpecName: "utilities") pod "35334030-48c7-4d7e-b202-75371c2c74f0" (UID: "35334030-48c7-4d7e-b202-75371c2c74f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.748968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities" (OuterVolumeSpecName: "utilities") pod "7bc37994-d436-4a72-93dd-610683ab871f" (UID: "7bc37994-d436-4a72-93dd-610683ab871f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.754247 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm" (OuterVolumeSpecName: "kube-api-access-44bcm") pod "7bc37994-d436-4a72-93dd-610683ab871f" (UID: "7bc37994-d436-4a72-93dd-610683ab871f"). InnerVolumeSpecName "kube-api-access-44bcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.754326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn" (OuterVolumeSpecName: "kube-api-access-zpswn") pod "35334030-48c7-4d7e-b202-75371c2c74f0" (UID: "35334030-48c7-4d7e-b202-75371c2c74f0"). InnerVolumeSpecName "kube-api-access-zpswn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.773477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bc37994-d436-4a72-93dd-610683ab871f" (UID: "7bc37994-d436-4a72-93dd-610683ab871f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.797147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.803810 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35334030-48c7-4d7e-b202-75371c2c74f0" (UID: "35334030-48c7-4d7e-b202-75371c2c74f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.810602 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0c32a61-d689-4c79-8348-90c8ab61b594" (UID: "c0c32a61-d689-4c79-8348-90c8ab61b594"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.848578 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"ee31f112-5156-4239-a760-fb4c6bb9673d\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"ee31f112-5156-4239-a760-fb4c6bb9673d\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"ee31f112-5156-4239-a760-fb4c6bb9673d\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849639 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849726 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849805 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849866 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849953 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.850027 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.850104 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ee31f112-5156-4239-a760-fb4c6bb9673d" (UID: "ee31f112-5156-4239-a760-fb4c6bb9673d"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.855928 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl" (OuterVolumeSpecName: "kube-api-access-fglxl") pod "ee31f112-5156-4239-a760-fb4c6bb9673d" (UID: "ee31f112-5156-4239-a760-fb4c6bb9673d"). InnerVolumeSpecName "kube-api-access-fglxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.856653 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ee31f112-5156-4239-a760-fb4c6bb9673d" (UID: "ee31f112-5156-4239-a760-fb4c6bb9673d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.890456 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.951485 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.951536 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.951549 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.067779 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.067773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"4b24ce2f2248f4687d66222d8d64c3f4c7ab1a667da994a65103b5daf7f6074a"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.067960 4869 scope.go:117] "RemoveContainer" containerID="4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.071210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerDied","Data":"abf150712433e6a69bcdbac96eb8f5a7e4f4678220a199cb5fef1de1079707b8"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.071681 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.074754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"8d9df88387111e57bb9b1545d6cad7ddb2c341d0c3125931bf95ce3cfbbe8249"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.074835 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.079976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"b1580b4316ca71373b5cb2c825bf6078883c98f4a09960236d48783fdf4eb2b0"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.080019 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.080385 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.089380 4869 scope.go:117] "RemoveContainer" containerID="26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.109509 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.113545 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.122493 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.126595 4869 scope.go:117] "RemoveContainer" containerID="5bd8c5ee8e9e88d2880af3adebbdb0e7854ddadb441729295abb6d7e6958afdd" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.135847 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.141402 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.148743 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.156362 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.157148 4869 scope.go:117] "RemoveContainer" containerID="86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.160581 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.165356 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.171199 4869 scope.go:117] "RemoveContainer" 
containerID="0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.188179 4869 scope.go:117] "RemoveContainer" containerID="0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.206524 4869 scope.go:117] "RemoveContainer" containerID="cec776d323dbe8236b1c9db4384ebac1fa16daa022330512eaace0844c3b9f88" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.221359 4869 scope.go:117] "RemoveContainer" containerID="1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.244177 4869 scope.go:117] "RemoveContainer" containerID="5adb81683a3033beec8093b130282168a76c6d84454acac94fe5c2d0d6d3406d" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.261586 4869 scope.go:117] "RemoveContainer" containerID="cdd5576f9f5156d7b56f7ccd77833310c25ec9af1f7cd6b12b8a45a03d8370d2" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.468976 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20990512-5147-4de8-95e0-f40e2156f395" path="/var/lib/kubelet/pods/20990512-5147-4de8-95e0-f40e2156f395/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.469773 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" path="/var/lib/kubelet/pods/35334030-48c7-4d7e-b202-75371c2c74f0/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.470407 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bc37994-d436-4a72-93dd-610683ab871f" path="/var/lib/kubelet/pods/7bc37994-d436-4a72-93dd-610683ab871f/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.471519 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" path="/var/lib/kubelet/pods/c0c32a61-d689-4c79-8348-90c8ab61b594/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.472309 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" path="/var/lib/kubelet/pods/ee31f112-5156-4239-a760-fb4c6bb9673d/volumes" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.857130 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.857557 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983143 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983388 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983473 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983506 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983806 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983822 4869 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983832 4869 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983841 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.992430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.086154 4869 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.097821 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.097901 4869 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" exitCode=137 Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.097984 4869 scope.go:117] "RemoveContainer" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.098055 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.116702 4869 scope.go:117] "RemoveContainer" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" Feb 02 14:38:33 crc kubenswrapper[4869]: E0202 14:38:33.117648 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2\": container with ID starting with b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2 not found: ID does not exist" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.117718 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2"} err="failed to get container status \"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2\": rpc error: code = NotFound desc = could not find container \"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2\": container with ID starting with b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2 not found: ID does not exist" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.470697 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.471503 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.483034 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.483070 4869 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1e0f1580-dcf4-4d0f-9452-87e32349b7e4" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.486967 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.487012 4869 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1e0f1580-dcf4-4d0f-9452-87e32349b7e4" Feb 02 14:38:43 crc kubenswrapper[4869]: I0202 14:38:43.222197 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:38:43 crc kubenswrapper[4869]: I0202 14:38:43.645293 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:44 crc kubenswrapper[4869]: I0202 14:38:44.665783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.188416 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190627 4869 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190680 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48" exitCode=137 Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48"} Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ea5b7458c59608c72e3a8c6859a0b53705310e26f2ff2566fc22841a8f80c2a"} Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190767 4869 scope.go:117] "RemoveContainer" containerID="24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.247188 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 14:38:48 crc kubenswrapper[4869]: I0202 14:38:48.198958 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 14:38:50 crc kubenswrapper[4869]: I0202 14:38:50.118736 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.128136 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.180194 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.184477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.255596 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.393407 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 14:39:01 crc kubenswrapper[4869]: I0202 14:39:01.765474 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659502 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbjts"] Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659780 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659794 
4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659813 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659822 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659833 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659849 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659857 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659866 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659873 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659885 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659892 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659927 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659934 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659949 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659968 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659974 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659983 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerName="installer" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 
14:39:02.659990 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerName="installer" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659999 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660007 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660016 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660023 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660049 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660055 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660063 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660175 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660186 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660197 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660204 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660212 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerName="installer" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660219 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660228 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" 
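
The cpu_manager and state_mem lines above show the kubelet pruning CPU- and memory-manager bookkeeping for pod UIDs that no longer exist (the deleted marketplace and startup-monitor pods), triggered here by the admission of the replacement pod marketplace-operator-79b997595-nbjts. As a rough aid for correlating these RemoveStaleState entries with on-disk state, the sketch below reads the kubelet's CPU-manager checkpoint file. This is an illustration, not part of the log: the path /var/lib/kubelet/cpu_manager_state and the JSON field names (policyName, defaultCpuSet, entries, checksum) are assumptions based on common kubelet defaults and should be verified on the node; with the default "none" policy, entries is typically empty.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// cpuManagerCheckpoint mirrors the JSON layout commonly found in the kubelet's
// CPU-manager checkpoint (assumed layout; verify against your kubelet version).
type cpuManagerCheckpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container name -> cpuset
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	// Assumed default checkpoint location; configurable via the kubelet's root dir.
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("policy=%s defaultCPUSet=%q\n", cp.PolicyName, cp.DefaultCPUSet)
	// Entries keyed by pod UIDs that no longer exist on the node are the kind of
	// stale assignments that RemoveStaleState prunes when a new pod is admitted.
	for podUID, containers := range cp.Entries {
		for name, set := range containers {
			fmt.Printf("pod=%s container=%s cpuset=%s\n", podUID, name, set)
		}
	}
}

Comparing the pod UIDs printed by such a tool against the UIDs in the "Deleted CPUSet assignment" entries above is one way to confirm the checkpoint was cleaned up after the pod deletions logged earlier in this window.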
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.664403 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.666631 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.666930 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.667575 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.671875 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.681370 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbjts"] Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.767201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.767256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvf6\" (UniqueName: \"kubernetes.io/projected/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-kube-api-access-6rvf6\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.767290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.868829 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.868925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvf6\" (UniqueName: \"kubernetes.io/projected/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-kube-api-access-6rvf6\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc 
kubenswrapper[4869]: I0202 14:39:02.868978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.872263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.884215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.903212 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvf6\" (UniqueName: \"kubernetes.io/projected/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-kube-api-access-6rvf6\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.980202 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:03 crc kubenswrapper[4869]: I0202 14:39:03.418648 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbjts"] Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.145595 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.296114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" event={"ID":"ac6a4d49-eb04-4ee1-be26-63f67b0a092a","Type":"ContainerStarted","Data":"b2e029c65d6e48d2645c3fb492df9d470b266ae7b404a1a2155b1b79d629205e"} Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.296183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" event={"ID":"ac6a4d49-eb04-4ee1-be26-63f67b0a092a","Type":"ContainerStarted","Data":"3eb2749fc9592070d3e3312a947ae9e8dfe258360eea4d3e751f4bb67da2ad1e"} Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.296427 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.299463 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.315227 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" podStartSLOduration=2.3152014530000002 podStartE2EDuration="2.315201453s" podCreationTimestamp="2026-02-02 14:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:39:04.315041798 +0000 UTC m=+345.959678568" watchObservedRunningTime="2026-02-02 14:39:04.315201453 +0000 UTC m=+345.959838223" Feb 02 14:39:05 crc kubenswrapper[4869]: I0202 14:39:05.253080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.380401 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ndh2z"] Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.382208 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.386496 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.393524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndh2z"] Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.454342 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46dm\" (UniqueName: \"kubernetes.io/projected/13714902-1992-4167-97b5-f3465ce5038f-kube-api-access-m46dm\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.454472 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-utilities\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.454521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-catalog-content\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.555408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-utilities\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.555477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-catalog-content\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.555553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m46dm\" (UniqueName: \"kubernetes.io/projected/13714902-1992-4167-97b5-f3465ce5038f-kube-api-access-m46dm\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.556109 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-utilities\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.556142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-catalog-content\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " 
pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.578462 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m46dm\" (UniqueName: \"kubernetes.io/projected/13714902-1992-4167-97b5-f3465ce5038f-kube-api-access-m46dm\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.581373 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjh6d"] Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.583518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.586413 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.591190 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjh6d"] Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.657064 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmm5v\" (UniqueName: \"kubernetes.io/projected/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-kube-api-access-cmm5v\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.657177 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-catalog-content\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.657197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-utilities\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.701028 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ndh2z" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-utilities\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759158 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-catalog-content\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmm5v\" (UniqueName: \"kubernetes.io/projected/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-kube-api-access-cmm5v\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-utilities\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-catalog-content\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.781735 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmm5v\" (UniqueName: \"kubernetes.io/projected/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-kube-api-access-cmm5v\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.918419 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjh6d" Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.122902 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjh6d"] Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.149284 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndh2z"] Feb 02 14:39:09 crc kubenswrapper[4869]: W0202 14:39:09.156877 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13714902_1992_4167_97b5_f3465ce5038f.slice/crio-06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a WatchSource:0}: Error finding container 06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a: Status 404 returned error can't find the container with id 06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.335378 4869 generic.go:334] "Generic (PLEG): container finished" podID="13714902-1992-4167-97b5-f3465ce5038f" containerID="70989fe11ed14396a31642b6c670ee78915afd5b782f6428feb661ae40b98ce9" exitCode=0 Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.335472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerDied","Data":"70989fe11ed14396a31642b6c670ee78915afd5b782f6428feb661ae40b98ce9"} Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.335895 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerStarted","Data":"06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a"} Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.340405 4869 generic.go:334] "Generic (PLEG): container finished" podID="5e1c62bb-e047-4367-9cd0-572ac75fd6f6" containerID="363a6e67ae8e4aad0256851aded7eebb05e4e2c2143f2c26da007d4540107db2" exitCode=0 Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.340478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerDied","Data":"363a6e67ae8e4aad0256851aded7eebb05e4e2c2143f2c26da007d4540107db2"} Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.340519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerStarted","Data":"39c11bf6bed1d7b894405306c81ec4e98b915aefee768e0f754abb720e2c0c31"} Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.177314 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7q5gz"] Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.178631 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.182308 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.202869 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q5gz"] Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.281265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-utilities\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.281393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-catalog-content\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.281457 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4cz\" (UniqueName: \"kubernetes.io/projected/395af9bf-292b-41d1-a4ad-e4983331bc2d-kube-api-access-tq4cz\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.347886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerStarted","Data":"fb751668cace5e796104b0026041db69850061244a846942538cac63d0630eea"} Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.383682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-utilities\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.383751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-catalog-content\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.383775 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq4cz\" (UniqueName: \"kubernetes.io/projected/395af9bf-292b-41d1-a4ad-e4983331bc2d-kube-api-access-tq4cz\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.384455 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-utilities\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " 
pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.384552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-catalog-content\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.408266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq4cz\" (UniqueName: \"kubernetes.io/projected/395af9bf-292b-41d1-a4ad-e4983331bc2d-kube-api-access-tq4cz\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.516753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q5gz" Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.923534 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q5gz"] Feb 02 14:39:10 crc kubenswrapper[4869]: W0202 14:39:10.929198 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod395af9bf_292b_41d1_a4ad_e4983331bc2d.slice/crio-501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06 WatchSource:0}: Error finding container 501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06: Status 404 returned error can't find the container with id 501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06 Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.045807 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.175156 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hh8gt"] Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.176404 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.179101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.227286 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh8gt"] Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.299327 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9tml\" (UniqueName: \"kubernetes.io/projected/59d9a56c-d3b3-438c-8047-097cb18004b1-kube-api-access-z9tml\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.299396 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-catalog-content\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.299434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-utilities\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.356580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerStarted","Data":"e1e7140bac7235af94cee6b6434ebda86378ecaef383f2e6f017a7c810a50cf2"} Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.359368 4869 generic.go:334] "Generic (PLEG): container finished" podID="5e1c62bb-e047-4367-9cd0-572ac75fd6f6" containerID="fb751668cace5e796104b0026041db69850061244a846942538cac63d0630eea" exitCode=0 Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.359410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerDied","Data":"fb751668cace5e796104b0026041db69850061244a846942538cac63d0630eea"} Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.361405 4869 generic.go:334] "Generic (PLEG): container finished" podID="395af9bf-292b-41d1-a4ad-e4983331bc2d" containerID="7a31feacb682d936469883a845a376b5718e8a273369759d9c64ae025eba3375" exitCode=0 Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.361448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerDied","Data":"7a31feacb682d936469883a845a376b5718e8a273369759d9c64ae025eba3375"} Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.361476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerStarted","Data":"501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06"} Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.401173 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-catalog-content\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.401310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-utilities\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.401511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9tml\" (UniqueName: \"kubernetes.io/projected/59d9a56c-d3b3-438c-8047-097cb18004b1-kube-api-access-z9tml\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.402523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-catalog-content\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.403239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-utilities\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.445024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9tml\" (UniqueName: \"kubernetes.io/projected/59d9a56c-d3b3-438c-8047-097cb18004b1-kube-api-access-z9tml\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.502172 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh8gt" Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.920519 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh8gt"] Feb 02 14:39:11 crc kubenswrapper[4869]: W0202 14:39:11.932036 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59d9a56c_d3b3_438c_8047_097cb18004b1.slice/crio-2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24 WatchSource:0}: Error finding container 2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24: Status 404 returned error can't find the container with id 2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24 Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.371013 4869 generic.go:334] "Generic (PLEG): container finished" podID="13714902-1992-4167-97b5-f3465ce5038f" containerID="e1e7140bac7235af94cee6b6434ebda86378ecaef383f2e6f017a7c810a50cf2" exitCode=0 Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.371130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerDied","Data":"e1e7140bac7235af94cee6b6434ebda86378ecaef383f2e6f017a7c810a50cf2"} Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.375836 4869 generic.go:334] "Generic (PLEG): container finished" podID="59d9a56c-d3b3-438c-8047-097cb18004b1" containerID="b9985f7429a2c20cbb511e0d24812ea1a14753155ff1da9a07857a29232435e8" exitCode=0 Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.375885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerDied","Data":"b9985f7429a2c20cbb511e0d24812ea1a14753155ff1da9a07857a29232435e8"} Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.375955 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerStarted","Data":"2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24"} Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.378822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerStarted","Data":"57553b2bca8f926094153a6c4f01060b889ab14dcd6016ab000b936c1106578e"} Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.382622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerStarted","Data":"849d9e1614ff29db841e7f9af8ed8e15dcbbf2f5c650a9ffc2905934514e6149"} Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.444655 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjh6d" podStartSLOduration=1.977796587 podStartE2EDuration="4.444625124s" podCreationTimestamp="2026-02-02 14:39:08 +0000 UTC" firstStartedPulling="2026-02-02 14:39:09.343140166 +0000 UTC m=+350.987776936" lastFinishedPulling="2026-02-02 14:39:11.809968703 +0000 UTC m=+353.454605473" observedRunningTime="2026-02-02 14:39:12.440864132 +0000 UTC m=+354.085500902" watchObservedRunningTime="2026-02-02 14:39:12.444625124 +0000 UTC m=+354.089261894" Feb 02 
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.391522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerStarted","Data":"a7b0389137253af6d37e348c1d32878d1bec9ddf49549469a52daa6efff33817"}
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.394258 4869 generic.go:334] "Generic (PLEG): container finished" podID="59d9a56c-d3b3-438c-8047-097cb18004b1" containerID="944668fb54c4b310f8a7b8e62680329cda99afcb1e1be80d5665ae6eb46ba989" exitCode=0
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.394345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerDied","Data":"944668fb54c4b310f8a7b8e62680329cda99afcb1e1be80d5665ae6eb46ba989"}
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.396738 4869 generic.go:334] "Generic (PLEG): container finished" podID="395af9bf-292b-41d1-a4ad-e4983331bc2d" containerID="849d9e1614ff29db841e7f9af8ed8e15dcbbf2f5c650a9ffc2905934514e6149" exitCode=0
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.396796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerDied","Data":"849d9e1614ff29db841e7f9af8ed8e15dcbbf2f5c650a9ffc2905934514e6149"}
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.414795 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ndh2z" podStartSLOduration=1.953279125 podStartE2EDuration="5.414778307s" podCreationTimestamp="2026-02-02 14:39:08 +0000 UTC" firstStartedPulling="2026-02-02 14:39:09.338118262 +0000 UTC m=+350.982755032" lastFinishedPulling="2026-02-02 14:39:12.799617444 +0000 UTC m=+354.444254214" observedRunningTime="2026-02-02 14:39:13.414205592 +0000 UTC m=+355.058842382" watchObservedRunningTime="2026-02-02 14:39:13.414778307 +0000 UTC m=+355.059415077"
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.403942 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerStarted","Data":"fe7f752c7371146161b3322d19658bbf3624d19b144a69cd2446d1591c6d5154"}
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.407084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerStarted","Data":"5c08a05becbc3df96b38abb582855ce693566fa31b29b170cf6a5dbdd37b6239"}
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.450087 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7q5gz" podStartSLOduration=1.99627111 podStartE2EDuration="4.450071668s" podCreationTimestamp="2026-02-02 14:39:10 +0000 UTC" firstStartedPulling="2026-02-02 14:39:11.363418064 +0000 UTC m=+353.008054834" lastFinishedPulling="2026-02-02 14:39:13.817218622 +0000 UTC m=+355.461855392" observedRunningTime="2026-02-02 14:39:14.430051937 +0000 UTC m=+356.074688707" watchObservedRunningTime="2026-02-02 14:39:14.450071668 +0000 UTC m=+356.094708438"
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.450828 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hh8gt" podStartSLOduration=1.91828054 podStartE2EDuration="3.450822677s" podCreationTimestamp="2026-02-02 14:39:11 +0000 UTC" firstStartedPulling="2026-02-02 14:39:12.377543506 +0000 UTC m=+354.022180276" lastFinishedPulling="2026-02-02 14:39:13.910085643 +0000 UTC m=+355.554722413" observedRunningTime="2026-02-02 14:39:14.449320999 +0000 UTC m=+356.093957769" watchObservedRunningTime="2026-02-02 14:39:14.450822677 +0000 UTC m=+356.095459447"
Feb 02 14:39:15 crc kubenswrapper[4869]: I0202 14:39:15.305030 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:39:15 crc kubenswrapper[4869]: I0202 14:39:15.305110 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.701876 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.702350 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.746490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.918781 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.918859 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.976307 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:19 crc kubenswrapper[4869]: I0202 14:39:19.488680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:19 crc kubenswrapper[4869]: I0202 14:39:19.489223 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:20 crc kubenswrapper[4869]: I0202 14:39:20.518211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:20 crc kubenswrapper[4869]: I0202 14:39:20.518259 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:20 crc kubenswrapper[4869]: I0202 14:39:20.574537 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.492997 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.504109 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.504203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.548649 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:22 crc kubenswrapper[4869]: I0202 14:39:22.508839 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:45 crc kubenswrapper[4869]: I0202 14:39:45.304178 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:39:45 crc kubenswrapper[4869]: I0202 14:39:45.305273 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.701237 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvqz"]
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.702618 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.725092 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvqz"]
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.797745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b0f13f-4134-4679-9f31-aef45d67a17e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.797835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f13f-4134-4679-9f31-aef45d67a17e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-trusted-ca\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName:
\"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798409 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-certificates\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzlh\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-kube-api-access-gkzlh\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798585 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-tls\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.832689 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899677 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b0f13f-4134-4679-9f31-aef45d67a17e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f13f-4134-4679-9f31-aef45d67a17e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-trusted-ca\") pod 
\"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-certificates\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzlh\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-kube-api-access-gkzlh\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-tls\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.900492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b0f13f-4134-4679-9f31-aef45d67a17e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.901866 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-certificates\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.902556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-trusted-ca\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.908037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f13f-4134-4679-9f31-aef45d67a17e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.908606 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-tls\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.918125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.919044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkzlh\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-kube-api-access-gkzlh\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.021819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.444000 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvqz"] Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.632837 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" event={"ID":"68b0f13f-4134-4679-9f31-aef45d67a17e","Type":"ContainerStarted","Data":"32aed9b4821c581a755924819611e1370a6a0f4dcb8740689d02a250b4b34b9e"} Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.632899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" event={"ID":"68b0f13f-4134-4679-9f31-aef45d67a17e","Type":"ContainerStarted","Data":"bfb6bf8a6f3421fa190fbe7d00511fb3c6e005376a108640c41108024d9c8e31"} Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.633007 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.654879 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" podStartSLOduration=1.654861419 podStartE2EDuration="1.654861419s" podCreationTimestamp="2026-02-02 14:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:39:50.650227355 +0000 UTC m=+392.294864145" watchObservedRunningTime="2026-02-02 14:39:50.654861419 +0000 UTC m=+392.299498179" Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.598609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.599636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.600974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.612901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.863404 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.613864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.614770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.622547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.622569 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.671006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3522fd55108264ab7d8c239ae644ed2ab9033308946e948fcf49170011ce4de1"} Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.671060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70137a8a2f3c20fb6a39efa808c246b234aab2a7c954f80bbd0795e5f798f3f9"} Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.763093 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.867214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:39:57 crc kubenswrapper[4869]: W0202 14:39:57.070592 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40 WatchSource:0}: Error finding container a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40: Status 404 returned error can't find the container with id a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40 Feb 02 14:39:57 crc kubenswrapper[4869]: W0202 14:39:57.119302 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a WatchSource:0}: Error finding container e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a: Status 404 returned error can't find the container with id e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.677454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"840587376a761b12f0164e7ebc684fac3c74f6d95b8a3d7695db7160ea95cd4c"} Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.677957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a"} Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.679582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1d3d7e2acd859a1a5e44debb32a8531cebcbe65c335e23d8ffaee1119f5492e9"} Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.679620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40"} Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.679795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:40:10 crc kubenswrapper[4869]: I0202 14:40:10.029788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" Feb 02 14:40:10 crc kubenswrapper[4869]: I0202 14:40:10.094418 4869 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.304038 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.305760 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.305877 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.306668 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.306830 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24" gracePeriod=600 Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821280 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24" exitCode=0 Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24"} Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa"} Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821442 4869 scope.go:117] "RemoveContainer" containerID="322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.143684 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" containerID="cri-o://d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" gracePeriod=30 Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.522230 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670516 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670930 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.671001 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.672586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.672740 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.678348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.678837 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx" (OuterVolumeSpecName: "kube-api-access-2xsnx") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "kube-api-access-2xsnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.679425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.683898 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.689554 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.690574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773071 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773135 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773147 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773163 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773172 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773182 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773190 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972362 4869 generic.go:334] "Generic (PLEG): container finished" podID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" exitCode=0 Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerDied","Data":"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a"} Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerDied","Data":"01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf"} Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972465 4869 scope.go:117] "RemoveContainer" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972608 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.997057 4869 scope.go:117] "RemoveContainer" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" Feb 02 14:40:36 crc kubenswrapper[4869]: E0202 14:40:36.000779 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a\": container with ID starting with d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a not found: ID does not exist" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.000899 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a"} err="failed to get container status \"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a\": rpc error: code = NotFound desc = could not find container \"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a\": container with ID starting with d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a not found: ID does not exist" Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.014309 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.020167 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.777823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:40:37 crc kubenswrapper[4869]: I0202 14:40:37.475778 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" path="/var/lib/kubelet/pods/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97/volumes" Feb 02 14:42:15 crc kubenswrapper[4869]: I0202 14:42:15.304458 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:42:15 crc kubenswrapper[4869]: I0202 14:42:15.306182 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:42:45 crc kubenswrapper[4869]: I0202 14:42:45.304868 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:42:45 crc kubenswrapper[4869]: I0202 14:42:45.305621 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.304770 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.305561 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.305638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.306399 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.306472 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa" gracePeriod=600 Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.968706 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa" exitCode=0 Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.968806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa"} Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.969476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486"} Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.969512 4869 scope.go:117] "RemoveContainer" containerID="cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.363515 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-498mc"] Feb 02 14:44:11 crc kubenswrapper[4869]: E0202 14:44:11.364616 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.364639 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" Feb 02 14:44:11 crc 
kubenswrapper[4869]: I0202 14:44:11.364800 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.365396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.368557 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.368585 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.368607 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-66t7x" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.376596 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7j57w"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.377662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.390695 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-498mc"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.397855 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vd825" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.415921 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7j57w"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.434586 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dfqjm"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.435703 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.437020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm56c\" (UniqueName: \"kubernetes.io/projected/92227558-4fbe-40b7-8a51-f9ba7043125a-kube-api-access-nm56c\") pod \"cert-manager-cainjector-cf98fcc89-498mc\" (UID: \"92227558-4fbe-40b7-8a51-f9ba7043125a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.437124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvjl\" (UniqueName: \"kubernetes.io/projected/d96c83c3-8f98-40c8-85f8-37cdf10eaeb7-kube-api-access-9vvjl\") pod \"cert-manager-858654f9db-7j57w\" (UID: \"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7\") " pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.439361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5c7xq" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.445475 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dfqjm"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.539892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvjl\" (UniqueName: \"kubernetes.io/projected/d96c83c3-8f98-40c8-85f8-37cdf10eaeb7-kube-api-access-9vvjl\") pod \"cert-manager-858654f9db-7j57w\" (UID: \"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7\") " pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.540379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdq7\" (UniqueName: \"kubernetes.io/projected/804bb5fc-4d8e-4f9f-892b-6d9af2943dbd-kube-api-access-xgdq7\") pod \"cert-manager-webhook-687f57d79b-dfqjm\" (UID: \"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.540421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm56c\" (UniqueName: \"kubernetes.io/projected/92227558-4fbe-40b7-8a51-f9ba7043125a-kube-api-access-nm56c\") pod \"cert-manager-cainjector-cf98fcc89-498mc\" (UID: \"92227558-4fbe-40b7-8a51-f9ba7043125a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.566140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm56c\" (UniqueName: \"kubernetes.io/projected/92227558-4fbe-40b7-8a51-f9ba7043125a-kube-api-access-nm56c\") pod \"cert-manager-cainjector-cf98fcc89-498mc\" (UID: \"92227558-4fbe-40b7-8a51-f9ba7043125a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.566186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvjl\" (UniqueName: \"kubernetes.io/projected/d96c83c3-8f98-40c8-85f8-37cdf10eaeb7-kube-api-access-9vvjl\") pod \"cert-manager-858654f9db-7j57w\" (UID: \"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7\") " pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.641795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xgdq7\" (UniqueName: \"kubernetes.io/projected/804bb5fc-4d8e-4f9f-892b-6d9af2943dbd-kube-api-access-xgdq7\") pod \"cert-manager-webhook-687f57d79b-dfqjm\" (UID: \"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.664823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdq7\" (UniqueName: \"kubernetes.io/projected/804bb5fc-4d8e-4f9f-892b-6d9af2943dbd-kube-api-access-xgdq7\") pod \"cert-manager-webhook-687f57d79b-dfqjm\" (UID: \"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.689521 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.702999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.759569 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.068399 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dfqjm"] Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.078797 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.192755 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-498mc"] Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.195951 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7j57w"] Feb 02 14:44:12 crc kubenswrapper[4869]: W0202 14:44:12.200591 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92227558_4fbe_40b7_8a51_f9ba7043125a.slice/crio-4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b WatchSource:0}: Error finding container 4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b: Status 404 returned error can't find the container with id 4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b Feb 02 14:44:12 crc kubenswrapper[4869]: W0202 14:44:12.202857 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96c83c3_8f98_40c8_85f8_37cdf10eaeb7.slice/crio-83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5 WatchSource:0}: Error finding container 83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5: Status 404 returned error can't find the container with id 83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5 Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.332769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7j57w" event={"ID":"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7","Type":"ContainerStarted","Data":"83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5"} Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.334361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" 
event={"ID":"92227558-4fbe-40b7-8a51-f9ba7043125a","Type":"ContainerStarted","Data":"4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b"} Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.335682 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" event={"ID":"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd","Type":"ContainerStarted","Data":"65051957c6e408a3cb9a29d050951c2b90d76c6dd42e58fb0d821538e0a2e0e9"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.430397 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7j57w" event={"ID":"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7","Type":"ContainerStarted","Data":"0f22eb8fa541be17ecade5beb6c29aff2ab4b25b0f1cb555ca484a406d45f81b"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.433092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" event={"ID":"92227558-4fbe-40b7-8a51-f9ba7043125a","Type":"ContainerStarted","Data":"a13b64ac43b4ac85dd7f9f794c3d9573e2f89b04d18ee26f581cb4a91a2b1bf1"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.435488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" event={"ID":"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd","Type":"ContainerStarted","Data":"2b1494928ffdf68d62788d8e79f52641c3176be54728602de0852e36e5b9607b"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.435662 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.451846 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7j57w" podStartSLOduration=2.519794061 podStartE2EDuration="7.451818227s" podCreationTimestamp="2026-02-02 14:44:11 +0000 UTC" firstStartedPulling="2026-02-02 14:44:12.210674421 +0000 UTC m=+653.855311191" lastFinishedPulling="2026-02-02 14:44:17.142698587 +0000 UTC m=+658.787335357" observedRunningTime="2026-02-02 14:44:18.445138812 +0000 UTC m=+660.089775582" watchObservedRunningTime="2026-02-02 14:44:18.451818227 +0000 UTC m=+660.096454997" Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.470807 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" podStartSLOduration=2.2600140189999998 podStartE2EDuration="7.470782455s" podCreationTimestamp="2026-02-02 14:44:11 +0000 UTC" firstStartedPulling="2026-02-02 14:44:12.07855907 +0000 UTC m=+653.723195840" lastFinishedPulling="2026-02-02 14:44:17.289327506 +0000 UTC m=+658.933964276" observedRunningTime="2026-02-02 14:44:18.469131185 +0000 UTC m=+660.113767955" watchObservedRunningTime="2026-02-02 14:44:18.470782455 +0000 UTC m=+660.115419225" Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.496076 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" podStartSLOduration=2.414307538 podStartE2EDuration="7.496049899s" podCreationTimestamp="2026-02-02 14:44:11 +0000 UTC" firstStartedPulling="2026-02-02 14:44:12.203530885 +0000 UTC m=+653.848167655" lastFinishedPulling="2026-02-02 14:44:17.285273246 +0000 UTC m=+658.929910016" observedRunningTime="2026-02-02 14:44:18.490263656 +0000 UTC m=+660.134900426" watchObservedRunningTime="2026-02-02 14:44:18.496049899 +0000 UTC m=+660.140686669" Feb 02 
14:44:26 crc kubenswrapper[4869]: I0202 14:44:26.763161 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.882784 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"] Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.885894 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" containerID="cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886029 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" containerID="cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886091 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" containerID="cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886186 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" containerID="cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886280 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" containerID="cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886254 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886251 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" containerID="cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.920733 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" containerID="cri-o://4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" gracePeriod=30 Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.738548 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.738582 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.740048 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.740044 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741668 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741690 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741718 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741765 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.589724 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.592726 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.593960 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" exitCode=143 Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.594049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.729809 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.735701 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.736668 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-controller/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737287 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" exitCode=0 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737317 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" exitCode=0 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737326 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" exitCode=0 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737335 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" 
containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" exitCode=143 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737400 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737412 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737429 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.830550 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.831416 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-controller/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.833589 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.914943 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7pc72"] Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915230 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915247 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915261 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915269 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915283 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915290 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915303 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915310 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915322 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915329 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915339 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kubecfg-setup" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915346 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kubecfg-setup" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915361 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915368 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915377 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915384 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915392 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915399 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915407 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915416 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915434 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915557 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915566 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915578 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915587 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915598 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915607 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915614 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915624 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915633 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915642 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915655 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: 
I0202 14:44:49.915778 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915789 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915804 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915959 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.918046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946718 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946850 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946869 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946930 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946946 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947131 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947162 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947182 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947201 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947284 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948595 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948642 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash" (OuterVolumeSpecName: "host-slash") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log" (OuterVolumeSpecName: "node-log") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948685 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948708 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948729 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket" (OuterVolumeSpecName: "log-socket") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948805 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951522 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.952580 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.958460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk" (OuterVolumeSpecName: "kube-api-access-r9lzk") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "kube-api-access-r9lzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.982261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.010590 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmflx\" (UniqueName: \"kubernetes.io/projected/87557492-f711-45db-abc2-beb315e8aad6-kube-api-access-hmflx\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-var-lib-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048776 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-script-lib\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-kubelet\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-bin\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-netd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-slash\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048864 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-ovn\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048950 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-systemd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-node-log\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048984 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-systemd-units\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-etc-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-log-socket\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049045 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-env-overrides\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049062 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-netns\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049084 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87557492-f711-45db-abc2-beb315e8aad6-ovn-node-metrics-cert\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-config\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049148 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049248 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049407 4869 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049423 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049433 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049443 4869 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049452 4869 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049461 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049471 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049508 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049524 4869 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049537 4869 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049546 4869 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049556 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049566 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049574 4869 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049582 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049590 4869 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049612 4869 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049621 4869 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150540 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87557492-f711-45db-abc2-beb315e8aad6-ovn-node-metrics-cert\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150601 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-config\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmflx\" (UniqueName: \"kubernetes.io/projected/87557492-f711-45db-abc2-beb315e8aad6-kube-api-access-hmflx\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-var-lib-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150722 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-script-lib\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150774 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-kubelet\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-bin\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-slash\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-var-lib-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-netd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-netd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-ovn\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151141 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-systemd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151181 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-node-log\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-ovn\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-systemd-units\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-systemd-units\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-node-log\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-systemd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151334 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151383 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-kubelet\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151402 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-slash\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151411 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-bin\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-etc-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: 
\"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-config\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-script-lib\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-etc-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151685 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-log-socket\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-log-socket\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-env-overrides\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-netns\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151872 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-netns\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.152284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-env-overrides\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.155038 
4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87557492-f711-45db-abc2-beb315e8aad6-ovn-node-metrics-cert\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.171376 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmflx\" (UniqueName: \"kubernetes.io/projected/87557492-f711-45db-abc2-beb315e8aad6-kube-api-access-hmflx\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.240753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.745545 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/2.log" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.746882 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.746977 4869 generic.go:334] "Generic (PLEG): container finished" podID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" exitCode=2 Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.747114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerDied","Data":"9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9"} Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.747226 4869 scope.go:117] "RemoveContainer" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.747994 4869 scope.go:117] "RemoveContainer" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" Feb 02 14:44:50 crc kubenswrapper[4869]: E0202 14:44:50.748228 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-d9vfd_openshift-multus(45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0)\"" pod="openshift-multus/multus-d9vfd" podUID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.749075 4869 generic.go:334] "Generic (PLEG): container finished" podID="87557492-f711-45db-abc2-beb315e8aad6" containerID="8a3f19721a174c0e4bcdc49eaa3b066e19b9f2c36326a3f3437ec28910709dd3" exitCode=0 Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.749176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerDied","Data":"8a3f19721a174c0e4bcdc49eaa3b066e19b9f2c36326a3f3437ec28910709dd3"} Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.749211 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"92cf3f0e5b2246382d6a71f4fd45d0dbd5ee40c72954ad51a49843bcff8dfeda"} Feb 02 
14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.760399 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.760968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-controller/0.log" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761495 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" exitCode=0 Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761542 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" exitCode=0 Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761550 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" exitCode=0 Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761627 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"ca0e0f37b2bf3d240e5eeec5425678446780834f9687e86b8adc4295de855905"} Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.840484 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"] Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.840889 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.859412 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"] Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.877231 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.910990 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.929082 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.952731 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.970218 4869 scope.go:117] "RemoveContainer" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.001158 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.021171 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.053239 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.080878 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.081966 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082003 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} err="failed to get 
container status \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082037 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.082310 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082337 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} err="failed to get container status \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082351 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.082693 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082715 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} err="failed to get container status \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082728 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.083053 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083099 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} err="failed to get container status \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083118 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.083460 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083487 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} err="failed to get container status \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083503 4869 scope.go:117] "RemoveContainer" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.083891 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083941 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} err="failed to get container status \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083960 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.084185 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does 
not exist" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084212 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} err="failed to get container status \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084229 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.084561 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084589 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} err="failed to get container status \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084603 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.084981 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085004 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} err="failed to get container status \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085018 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085303 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} err="failed to get container status \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085324 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085631 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} err="failed to get container status \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085653 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086167 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} err="failed to get container status \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086188 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086516 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} err="failed to get container status \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086572 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087026 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} err="failed to get container status \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" Feb 
02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087050 4869 scope.go:117] "RemoveContainer" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087360 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} err="failed to get container status \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087382 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087690 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} err="failed to get container status \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087716 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088055 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} err="failed to get container status \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088139 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088413 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} err="failed to get container status \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088441 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088688 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} err="failed to get container status 
\"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088717 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089120 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} err="failed to get container status \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089225 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089768 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} err="failed to get container status \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089821 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.090180 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} err="failed to get container status \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.090265 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.090988 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} err="failed to get container status \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.091124 4869 scope.go:117] "RemoveContainer" 
containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.091535 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} err="failed to get container status \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.091564 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092067 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} err="failed to get container status \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092155 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092564 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} err="failed to get container status \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092649 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.093004 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} err="failed to get container status \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.474656 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" path="/var/lib/kubelet/pods/2865336a-500d-43e5-a075-a9a8fa01b929/volumes" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.769587 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/2.log" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773443 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"c9be79deef99612295b7caa7dfca1612968b0e5ae16bff7d0d78a32b3e5807a1"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"8fa02ad9a4443651471568a5d67224ebf6ebcece67c3a49555b87761685be987"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"a0728cb3ce9d558c36ab033e2398d50677da9edeced8b02c52b008cf61e15c43"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"f6abfd62c36997603b140d03f8b50ff845abb9c387eb8ba76826ace576df937c"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773518 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"09fb08f5fd7c070b0c8a8b94cd9b0f840dd624f10bab8306e99ce06f4ac386ef"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"2762f0391747bceb22a017d0b2f1ac6b6f793cec083e1076db37abe1eed4dea2"} Feb 02 14:44:54 crc kubenswrapper[4869]: I0202 14:44:54.805209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"6840f38e8939d3f45764974f9e560c0780bfc4658b38bb920e707f73314d714c"} Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.823145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"94d77fc4a29ff2ad3e13b72e28a5645353aa7c282ce742e8c2988760370ef712"} Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.823998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.824018 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.856070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.861175 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" podStartSLOduration=7.861148851 podStartE2EDuration="7.861148851s" podCreationTimestamp="2026-02-02 14:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:44:56.858652339 +0000 UTC m=+698.503289129" watchObservedRunningTime="2026-02-02 14:44:56.861148851 
+0000 UTC m=+698.505785621" Feb 02 14:44:57 crc kubenswrapper[4869]: I0202 14:44:57.829820 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:57 crc kubenswrapper[4869]: I0202 14:44:57.862490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.186041 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.187608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.190412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.191093 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.198184 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.216677 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.216790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.216830 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.318247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.318339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.318358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.319311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.325991 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.337807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.511066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541208 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541327 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541358 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.847706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.848379 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.873429 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.874150 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.874204 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.874277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" Feb 02 14:45:04 crc kubenswrapper[4869]: I0202 14:45:04.463222 4869 scope.go:117] "RemoveContainer" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" Feb 02 14:45:04 crc kubenswrapper[4869]: E0202 14:45:04.464385 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-d9vfd_openshift-multus(45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0)\"" pod="openshift-multus/multus-d9vfd" podUID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" Feb 02 14:45:14 crc kubenswrapper[4869]: I0202 14:45:14.463239 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: I0202 14:45:14.464244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495128 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495225 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495255 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495312 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.304212 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.304754 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.403752 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"] Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.405073 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.408103 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.420054 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"]
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.444020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.444076 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.444118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.544802 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.544912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.544990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.545624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.546053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.568173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.722115 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.751948 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.752067 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.752102 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.752173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" podUID="264a08a0-30f5-4b76-af09-b97629a44d89"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.935648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.936364 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959362 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959520 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959601 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959716 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" Feb 02 14:45:19 crc kubenswrapper[4869]: I0202 14:45:19.466081 4869 scope.go:117] "RemoveContainer" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" Feb 02 14:45:19 crc kubenswrapper[4869]: I0202 14:45:19.962697 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/2.log" Feb 02 14:45:19 crc kubenswrapper[4869]: I0202 14:45:19.963248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"56ecb779755ed2fcdbb7598926faae2bd7dfcd26dd50f7a81b3afee1529e398a"} Feb 02 14:45:20 crc kubenswrapper[4869]: I0202 14:45:20.262982 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:45:25 crc kubenswrapper[4869]: I0202 14:45:25.462078 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:25 crc kubenswrapper[4869]: I0202 14:45:25.464526 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:25 crc kubenswrapper[4869]: I0202 14:45:25.892978 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 14:45:25 crc kubenswrapper[4869]: W0202 14:45:25.906583 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4a6eca8_9d17_4791_add2_36c7119da5a5.slice/crio-a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea WatchSource:0}: Error finding container a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea: Status 404 returned error can't find the container with id a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea Feb 02 14:45:26 crc kubenswrapper[4869]: I0202 14:45:26.005492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerStarted","Data":"a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea"} Feb 02 14:45:27 crc kubenswrapper[4869]: I0202 14:45:27.018122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerStarted","Data":"28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31"} Feb 02 14:45:28 crc kubenswrapper[4869]: I0202 14:45:28.025459 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerID="28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31" exitCode=0 Feb 02 14:45:28 crc kubenswrapper[4869]: I0202 14:45:28.025532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerDied","Data":"28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31"} Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.247252 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.352288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"f4a6eca8-9d17-4791-add2-36c7119da5a5\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.352381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"f4a6eca8-9d17-4791-add2-36c7119da5a5\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.352444 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"f4a6eca8-9d17-4791-add2-36c7119da5a5\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.353967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4a6eca8-9d17-4791-add2-36c7119da5a5" (UID: "f4a6eca8-9d17-4791-add2-36c7119da5a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.361004 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4a6eca8-9d17-4791-add2-36c7119da5a5" (UID: "f4a6eca8-9d17-4791-add2-36c7119da5a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.361149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft" (OuterVolumeSpecName: "kube-api-access-djxft") pod "f4a6eca8-9d17-4791-add2-36c7119da5a5" (UID: "f4a6eca8-9d17-4791-add2-36c7119da5a5"). InnerVolumeSpecName "kube-api-access-djxft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.454497 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.454578 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.454613 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.461897 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.468420 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.681366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"]
Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.041726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerStarted","Data":"3d55704d4b09f212b5146fa8b98350280e9257c874ccbfd3096bb9d93f76f046"}
Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.045197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerDied","Data":"a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea"}
Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.045484 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea"
Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.045344 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"
Feb 02 14:45:31 crc kubenswrapper[4869]: I0202 14:45:31.054024 4869 generic.go:334] "Generic (PLEG): container finished" podID="264a08a0-30f5-4b76-af09-b97629a44d89" containerID="dbeb3dc825ddaeab08d8880d37488299a02f6c4ff1dc855f4e1c5730b37c3cd1" exitCode=0
Feb 02 14:45:31 crc kubenswrapper[4869]: I0202 14:45:31.054210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"dbeb3dc825ddaeab08d8880d37488299a02f6c4ff1dc855f4e1c5730b37c3cd1"}
Feb 02 14:45:33 crc kubenswrapper[4869]: I0202 14:45:33.069647 4869 generic.go:334] "Generic (PLEG): container finished" podID="264a08a0-30f5-4b76-af09-b97629a44d89" containerID="9ec0f3627a9f2311679c1c3553aa17b3c4552ddf0042b3602aa64ae0827531d3" exitCode=0
Feb 02 14:45:33 crc kubenswrapper[4869]: I0202 14:45:33.070156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"9ec0f3627a9f2311679c1c3553aa17b3c4552ddf0042b3602aa64ae0827531d3"}
Feb 02 14:45:34 crc kubenswrapper[4869]: I0202 14:45:34.077758 4869 generic.go:334] "Generic (PLEG): container finished" podID="264a08a0-30f5-4b76-af09-b97629a44d89" containerID="f7b02e4164f64e068a6c2ef52f128d0be24196b740fc6632ad07b6bb50424192" exitCode=0
Feb 02 14:45:34 crc kubenswrapper[4869]: I0202 14:45:34.077819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"f7b02e4164f64e068a6c2ef52f128d0be24196b740fc6632ad07b6bb50424192"}
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.320856 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.444244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"264a08a0-30f5-4b76-af09-b97629a44d89\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") "
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.444427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"264a08a0-30f5-4b76-af09-b97629a44d89\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") "
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.444554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"264a08a0-30f5-4b76-af09-b97629a44d89\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") "
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.446188 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle" (OuterVolumeSpecName: "bundle") pod "264a08a0-30f5-4b76-af09-b97629a44d89" (UID: "264a08a0-30f5-4b76-af09-b97629a44d89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.454272 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5" (OuterVolumeSpecName: "kube-api-access-zj9f5") pod "264a08a0-30f5-4b76-af09-b97629a44d89" (UID: "264a08a0-30f5-4b76-af09-b97629a44d89"). InnerVolumeSpecName "kube-api-access-zj9f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.546745 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") on node \"crc\" DevicePath \"\""
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.546802 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.675083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util" (OuterVolumeSpecName: "util") pod "264a08a0-30f5-4b76-af09-b97629a44d89" (UID: "264a08a0-30f5-4b76-af09-b97629a44d89"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.750171 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") on node \"crc\" DevicePath \"\""
Feb 02 14:45:36 crc kubenswrapper[4869]: I0202 14:45:36.093783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"3d55704d4b09f212b5146fa8b98350280e9257c874ccbfd3096bb9d93f76f046"}
Feb 02 14:45:36 crc kubenswrapper[4869]: I0202 14:45:36.094356 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"
Feb 02 14:45:36 crc kubenswrapper[4869]: I0202 14:45:36.094367 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d55704d4b09f212b5146fa8b98350280e9257c874ccbfd3096bb9d93f76f046"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094155 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bbvzg"]
Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094685 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="extract"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094698 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="extract"
Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094710 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="util"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094716 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="util"
Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094733 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerName="collect-profiles"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094740 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerName="collect-profiles"
Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094751 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="pull"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094757 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="pull"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094848 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerName="collect-profiles"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094858 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="extract"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.095339 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.099150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ft4ld"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.099390 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.100726 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.116803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bbvzg"]
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.258710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwb8\" (UniqueName: \"kubernetes.io/projected/f417537d-ce1d-461c-afec-09d3ec96c3b4-kube-api-access-hxwb8\") pod \"nmstate-operator-646758c888-bbvzg\" (UID: \"f417537d-ce1d-461c-afec-09d3ec96c3b4\") " pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.360841 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwb8\" (UniqueName: \"kubernetes.io/projected/f417537d-ce1d-461c-afec-09d3ec96c3b4-kube-api-access-hxwb8\") pod \"nmstate-operator-646758c888-bbvzg\" (UID: \"f417537d-ce1d-461c-afec-09d3ec96c3b4\") " pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.389494 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwb8\" (UniqueName: \"kubernetes.io/projected/f417537d-ce1d-461c-afec-09d3ec96c3b4-kube-api-access-hxwb8\") pod \"nmstate-operator-646758c888-bbvzg\" (UID: \"f417537d-ce1d-461c-afec-09d3ec96c3b4\") " pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.411505 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg"
Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.670040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bbvzg"]
Feb 02 14:45:43 crc kubenswrapper[4869]: I0202 14:45:43.145635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" event={"ID":"f417537d-ce1d-461c-afec-09d3ec96c3b4","Type":"ContainerStarted","Data":"c13ef5637d3dab855332c53a9870a82b68730461e297e1d5bc7d98f2d0db85ca"}
Feb 02 14:45:45 crc kubenswrapper[4869]: I0202 14:45:45.304509 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:45:45 crc kubenswrapper[4869]: I0202 14:45:45.305082 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:45:46 crc kubenswrapper[4869]: I0202 14:45:46.164227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" event={"ID":"f417537d-ce1d-461c-afec-09d3ec96c3b4","Type":"ContainerStarted","Data":"acb4e608d7cc70546f4cc78b7c4f3cd38adf113ad0c4c0da4c37da3930a0db3d"}
Feb 02 14:45:46 crc kubenswrapper[4869]: I0202 14:45:46.185760 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" podStartSLOduration=1.795509509 podStartE2EDuration="4.185731211s" podCreationTimestamp="2026-02-02 14:45:42 +0000 UTC" firstStartedPulling="2026-02-02 14:45:42.674223065 +0000 UTC m=+744.318859835" lastFinishedPulling="2026-02-02 14:45:45.064444777 +0000 UTC m=+746.709081537" observedRunningTime="2026-02-02 14:45:46.183878626 +0000 UTC m=+747.828515426" watchObservedRunningTime="2026-02-02 14:45:46.185731211 +0000 UTC m=+747.830367981"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.119378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-647lw"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.121134 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.124820 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5f6cd"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.135017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5gp\" (UniqueName: \"kubernetes.io/projected/ec9ec105-2660-4787-89f3-5c0fe79e8e97-kube-api-access-zx5gp\") pod \"nmstate-metrics-54757c584b-647lw\" (UID: \"ec9ec105-2660-4787-89f3-5c0fe79e8e97\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.135395 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.136613 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.138880 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-647lw"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.140601 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.169415 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-87g86"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.170447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.217973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236447 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fz7\" (UniqueName: \"kubernetes.io/projected/3d92c75a-462e-4ff9-8373-8d91fb2624f4-kube-api-access-t4fz7\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5gp\" (UniqueName: \"kubernetes.io/projected/ec9ec105-2660-4787-89f3-5c0fe79e8e97-kube-api-access-zx5gp\") pod \"nmstate-metrics-54757c584b-647lw\" (UID: \"ec9ec105-2660-4787-89f3-5c0fe79e8e97\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236609 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-ovs-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86"
pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdbn\" (UniqueName: \"kubernetes.io/projected/bd339f13-8405-47aa-b76a-2cef40d3ec11-kube-api-access-rfdbn\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236695 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-nmstate-lock\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-dbus-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.272209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5gp\" (UniqueName: \"kubernetes.io/projected/ec9ec105-2660-4787-89f3-5c0fe79e8e97-kube-api-access-zx5gp\") pod \"nmstate-metrics-54757c584b-647lw\" (UID: \"ec9ec105-2660-4787-89f3-5c0fe79e8e97\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.274016 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.274846 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.282451 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pzplm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.282714 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.282839 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.290742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338424 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4fz7\" (UniqueName: \"kubernetes.io/projected/3d92c75a-462e-4ff9-8373-8d91fb2624f4-kube-api-access-t4fz7\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfn7v\" (UniqueName: \"kubernetes.io/projected/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-kube-api-access-lfn7v\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-ovs-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdbn\" (UniqueName: \"kubernetes.io/projected/bd339f13-8405-47aa-b76a-2cef40d3ec11-kube-api-access-rfdbn\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-nmstate-lock\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-dbus-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338751 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-ovs-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-nmstate-lock\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.339161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-dbus-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: E0202 14:45:47.339326 4869 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 14:45:47 crc kubenswrapper[4869]: E0202 14:45:47.339413 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair podName:bd339f13-8405-47aa-b76a-2cef40d3ec11 nodeName:}" failed. No retries permitted until 2026-02-02 14:45:47.839385475 +0000 UTC m=+749.484022245 (durationBeforeRetry 500ms). 
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.359736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4fz7\" (UniqueName: \"kubernetes.io/projected/3d92c75a-462e-4ff9-8373-8d91fb2624f4-kube-api-access-t4fz7\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.360146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdbn\" (UniqueName: \"kubernetes.io/projected/bd339f13-8405-47aa-b76a-2cef40d3ec11-kube-api-access-rfdbn\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.438319 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.439986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.440095 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfn7v\" (UniqueName: \"kubernetes.io/projected/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-kube-api-access-lfn7v\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.440129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.440989 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.443581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.470361 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfn7v\" (UniqueName: \"kubernetes.io/projected/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-kube-api-access-lfn7v\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.490073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.517600 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-865678f777-2fzjm"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.518551 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: W0202 14:45:47.532415 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d92c75a_462e_4ff9_8373_8d91fb2624f4.slice/crio-085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8 WatchSource:0}: Error finding container 085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8: Status 404 returned error can't find the container with id 085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.534060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865678f777-2fzjm"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545610 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-oauth-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545697 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-trusted-ca-bundle\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qkp\" (UniqueName: \"kubernetes.io/projected/272b4fd8-4ae3-4f19-a95e-1824605ae399-kube-api-access-84qkp\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-oauth-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545787 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.613158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-oauth-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-trusted-ca-bundle\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647364 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qkp\" (UniqueName: \"kubernetes.io/projected/272b4fd8-4ae3-4f19-a95e-1824605ae399-kube-api-access-84qkp\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-oauth-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.649007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.649824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-oauth-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.649943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-trusted-ca-bundle\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.650568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.653739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.657295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-oauth-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.669978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qkp\" (UniqueName: \"kubernetes.io/projected/272b4fd8-4ae3-4f19-a95e-1824605ae399-kube-api-access-84qkp\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.845316 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.850452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.855022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.967633 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-647lw"] Feb 02 14:45:48 crc kubenswrapper[4869]: W0202 14:45:48.048820 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9ec105_2660_4787_89f3_5c0fe79e8e97.slice/crio-7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a WatchSource:0}: Error finding container 7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a: Status 404 returned error can't find the container with id 7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.058719 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.126814 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"] Feb 02 14:45:48 crc kubenswrapper[4869]: W0202 14:45:48.129134 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60ca7e15_9af2_4019_9481_39f8bc9e4ec7.slice/crio-347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503 WatchSource:0}: Error finding container 347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503: Status 404 returned error can't find the container with id 347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503 Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.190509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" event={"ID":"60ca7e15-9af2-4019-9481-39f8bc9e4ec7","Type":"ContainerStarted","Data":"347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503"} Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.191963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-87g86" event={"ID":"3d92c75a-462e-4ff9-8373-8d91fb2624f4","Type":"ContainerStarted","Data":"085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8"} Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.193413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" event={"ID":"ec9ec105-2660-4787-89f3-5c0fe79e8e97","Type":"ContainerStarted","Data":"7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a"} Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.294582 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"] Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.417777 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865678f777-2fzjm"] Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.206776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" event={"ID":"bd339f13-8405-47aa-b76a-2cef40d3ec11","Type":"ContainerStarted","Data":"949d8c0b962ddfab3414b1ba43800a57513de10d43d4a68906d4a12aa0e88898"} Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.209663 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865678f777-2fzjm" event={"ID":"272b4fd8-4ae3-4f19-a95e-1824605ae399","Type":"ContainerStarted","Data":"7d0a72d0def9e1954932bd02c027699bcc2f0e0170223aa2ff5d374046c4657c"} Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.209691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865678f777-2fzjm" event={"ID":"272b4fd8-4ae3-4f19-a95e-1824605ae399","Type":"ContainerStarted","Data":"e9a4b0336aa9d9bd269f0d7c1d0acd23be9d2b7b846ab4eb4b7352fb1b115fac"} Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.243897 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-865678f777-2fzjm" podStartSLOduration=2.243867299 podStartE2EDuration="2.243867299s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:45:49.237069651 +0000 UTC m=+750.881706441" watchObservedRunningTime="2026-02-02 14:45:49.243867299 +0000 UTC m=+750.888504069" Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.268636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-87g86" event={"ID":"3d92c75a-462e-4ff9-8373-8d91fb2624f4","Type":"ContainerStarted","Data":"0807e97b912b347068af22b0f6836def97bf498254497a3e9833b930a6cf14d1"} Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.269311 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.273416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" event={"ID":"ec9ec105-2660-4787-89f3-5c0fe79e8e97","Type":"ContainerStarted","Data":"8d05a0649952c134f323ae6ba387754e4a9b01acae2733778dd56021c1900585"} Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.275868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" event={"ID":"60ca7e15-9af2-4019-9481-39f8bc9e4ec7","Type":"ContainerStarted","Data":"0291c6d878063957af807812412b2174d87efad86a7f36996de7f795e1b5b967"} Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.277713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" event={"ID":"bd339f13-8405-47aa-b76a-2cef40d3ec11","Type":"ContainerStarted","Data":"153ce006075cf4c3e3bf02efdcbdfdac87f7fdf9af6f76b12f222bbade8c4d89"} Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.278131 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.292317 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-87g86" podStartSLOduration=1.485079391 podStartE2EDuration="8.292287868s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:47.53450416 +0000 UTC m=+749.179140930" lastFinishedPulling="2026-02-02 14:45:54.341712637 +0000 UTC m=+755.986349407" observedRunningTime="2026-02-02 14:45:55.288433753 +0000 UTC m=+756.933070523" watchObservedRunningTime="2026-02-02 14:45:55.292287868 +0000 UTC m=+756.936924658" Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.310137 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" podStartSLOduration=2.156463441 podStartE2EDuration="8.310106518s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:48.305291884 +0000 UTC m=+749.949928654" lastFinishedPulling="2026-02-02 14:45:54.458934961 +0000 UTC m=+756.103571731" observedRunningTime="2026-02-02 14:45:55.305533235 +0000 UTC m=+756.950170005" watchObservedRunningTime="2026-02-02 14:45:55.310106518 +0000 UTC m=+756.954743288" Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.325673 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" podStartSLOduration=2.017301066 podStartE2EDuration="8.325648002s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:48.133847242 +0000 UTC m=+749.778484012" lastFinishedPulling="2026-02-02 14:45:54.442194178 +0000 UTC m=+756.086830948" observedRunningTime="2026-02-02 14:45:55.32393878 +0000 UTC m=+756.968575560" watchObservedRunningTime="2026-02-02 14:45:55.325648002 +0000 UTC m=+756.970284772" Feb 02 14:45:57 crc kubenswrapper[4869]: I0202 14:45:57.846642 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:57 crc kubenswrapper[4869]: I0202 14:45:57.847218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:57 crc kubenswrapper[4869]: I0202 14:45:57.853079 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:58 crc kubenswrapper[4869]: I0202 14:45:58.304559 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-865678f777-2fzjm" Feb 02 14:45:58 crc kubenswrapper[4869]: I0202 14:45:58.372119 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:46:01 crc kubenswrapper[4869]: I0202 14:46:01.336055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" event={"ID":"ec9ec105-2660-4787-89f3-5c0fe79e8e97","Type":"ContainerStarted","Data":"9bce820cfabdf958cea11a870204d457ffe3a16ab6a4bccdac0b0902d805f290"} Feb 02 14:46:02 crc kubenswrapper[4869]: I0202 14:46:02.373012 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" podStartSLOduration=3.594255048 podStartE2EDuration="15.372989796s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:48.056008692 +0000 UTC m=+749.700645462" lastFinishedPulling="2026-02-02 14:45:59.83474344 +0000 UTC m=+761.479380210" 
observedRunningTime="2026-02-02 14:46:02.371838897 +0000 UTC m=+764.016475667" watchObservedRunningTime="2026-02-02 14:46:02.372989796 +0000 UTC m=+764.017626566" Feb 02 14:46:02 crc kubenswrapper[4869]: I0202 14:46:02.521656 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:46:08 crc kubenswrapper[4869]: I0202 14:46:08.066944 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.304897 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.305776 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.305865 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.306932 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.307101 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486" gracePeriod=600 Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460168 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486" exitCode=0 Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486"} Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56"} Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460996 4869 scope.go:117] "RemoveContainer" containerID="995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.416816 4869 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-console/console-f9d7485db-ptmkd" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" containerID="cri-o://4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" gracePeriod=15 Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.875726 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ptmkd_ccaee1bd-fef5-4874-9e96-002a733fd5dc/console/0.log" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.875813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948365 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948489 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948527 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948563 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948650 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.949845 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config" (OuterVolumeSpecName: "console-config") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950104 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950437 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950470 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950483 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950599 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca" (OuterVolumeSpecName: "service-ca") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.959139 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.960721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.961229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf" (OuterVolumeSpecName: "kube-api-access-wbgxf") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "kube-api-access-wbgxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051697 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051754 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051765 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051775 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.349106 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"] Feb 02 14:46:24 crc kubenswrapper[4869]: E0202 14:46:24.349937 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.349957 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.350116 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.351326 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.354245 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.357490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"] Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.457691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.457970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.458040 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521430 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ptmkd_ccaee1bd-fef5-4874-9e96-002a733fd5dc/console/0.log" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521508 4869 generic.go:334] "Generic (PLEG): container finished" podID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" exitCode=2 Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521553 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerDied","Data":"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"} Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerDied","Data":"16f76cd6bf05f6fb4f402ecc35e901805472a099619bf8e10a27be6e93584f89"} Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521637 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521637 4869 scope.go:117] "RemoveContainer" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.561384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.561473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.561554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.563983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.565703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.575393 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.580042 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.583791 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.673115 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.850274 4869 scope.go:117] "RemoveContainer" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" Feb 02 14:46:24 crc kubenswrapper[4869]: E0202 14:46:24.850948 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed\": container with ID starting with 4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed not found: ID does not exist" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.851013 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"} err="failed to get container status \"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed\": rpc error: code = NotFound desc = could not find container \"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed\": container with ID starting with 4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed not found: ID does not exist" Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.087659 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"] Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.121349 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.475248 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" path="/var/lib/kubelet/pods/ccaee1bd-fef5-4874-9e96-002a733fd5dc/volumes" Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.528602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerStarted","Data":"c817e41703b8fc035a2f1079427307f969158b9aa17b598a01e4601d00e56c10"} Feb 02 14:46:26 crc kubenswrapper[4869]: I0202 14:46:26.536378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerStarted","Data":"f4d481a024f73f0f6b84ffe7965dab28071c8e977b999b0abc73b835eee8dca6"} Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.544176 4869 generic.go:334] "Generic (PLEG): container finished" podID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerID="f4d481a024f73f0f6b84ffe7965dab28071c8e977b999b0abc73b835eee8dca6" exitCode=0 Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.544606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"f4d481a024f73f0f6b84ffe7965dab28071c8e977b999b0abc73b835eee8dca6"} Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.701332 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:27 crc kubenswrapper[4869]: 
I0202 14:46:27.703058 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.709865 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.810238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.810296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.810318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911326 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.912002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.942855 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:28 crc kubenswrapper[4869]: I0202 14:46:28.024725 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:28 crc kubenswrapper[4869]: I0202 14:46:28.305484 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:28 crc kubenswrapper[4869]: W0202 14:46:28.331888 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba11fdd_6b64_41ad_9106_0eda21b92a5a.slice/crio-5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b WatchSource:0}: Error finding container 5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b: Status 404 returned error can't find the container with id 5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b Feb 02 14:46:28 crc kubenswrapper[4869]: I0202 14:46:28.551593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerStarted","Data":"5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b"} Feb 02 14:46:28 crc kubenswrapper[4869]: E0202 14:46:28.724412 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba11fdd_6b64_41ad_9106_0eda21b92a5a.slice/crio-386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba11fdd_6b64_41ad_9106_0eda21b92a5a.slice/crio-conmon-386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4.scope\": RecentStats: unable to find data in memory cache]" Feb 02 14:46:29 crc kubenswrapper[4869]: I0202 14:46:29.560362 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4" exitCode=0 Feb 02 14:46:29 crc kubenswrapper[4869]: I0202 14:46:29.560444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"} Feb 02 14:46:33 crc kubenswrapper[4869]: I0202 14:46:33.588077 4869 generic.go:334] "Generic (PLEG): container finished" podID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerID="63faa96ad77efc4aa0694b4e352025f2e43421a504a4556a806a0f787868c946" exitCode=0 Feb 02 14:46:33 crc kubenswrapper[4869]: I0202 14:46:33.588117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"63faa96ad77efc4aa0694b4e352025f2e43421a504a4556a806a0f787868c946"} Feb 02 14:46:34 crc kubenswrapper[4869]: I0202 14:46:34.598028 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerStarted","Data":"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"} Feb 02 14:46:34 crc kubenswrapper[4869]: I0202 14:46:34.601639 4869 generic.go:334] "Generic (PLEG): container finished" podID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerID="c9a4318758ed809be7787ae40078b0d811fd88f4892994d6c22c406e0867bbb4" exitCode=0 Feb 02 14:46:34 crc kubenswrapper[4869]: I0202 14:46:34.601704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"c9a4318758ed809be7787ae40078b0d811fd88f4892994d6c22c406e0867bbb4"} Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.610890 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892" exitCode=0 Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.611045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"} Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.882243 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.936329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.936418 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.936585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.937432 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle" (OuterVolumeSpecName: "bundle") pod "861ed901-c46c-49d9-83ad-aeca9fd3f93b" (UID: "861ed901-c46c-49d9-83ad-aeca9fd3f93b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.942867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc" (OuterVolumeSpecName: "kube-api-access-s9fxc") pod "861ed901-c46c-49d9-83ad-aeca9fd3f93b" (UID: "861ed901-c46c-49d9-83ad-aeca9fd3f93b"). InnerVolumeSpecName "kube-api-access-s9fxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.949146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util" (OuterVolumeSpecName: "util") pod "861ed901-c46c-49d9-83ad-aeca9fd3f93b" (UID: "861ed901-c46c-49d9-83ad-aeca9fd3f93b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.038098 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.038155 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.038165 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.622031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"c817e41703b8fc035a2f1079427307f969158b9aa17b598a01e4601d00e56c10"} Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.622102 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c817e41703b8fc035a2f1079427307f969158b9aa17b598a01e4601d00e56c10" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.622139 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.624828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerStarted","Data":"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"} Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.663185 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68hxt" podStartSLOduration=3.080218844 podStartE2EDuration="9.663156234s" podCreationTimestamp="2026-02-02 14:46:27 +0000 UTC" firstStartedPulling="2026-02-02 14:46:29.562265743 +0000 UTC m=+791.206902513" lastFinishedPulling="2026-02-02 14:46:36.145203133 +0000 UTC m=+797.789839903" observedRunningTime="2026-02-02 14:46:36.652999772 +0000 UTC m=+798.297636542" watchObservedRunningTime="2026-02-02 14:46:36.663156234 +0000 UTC m=+798.307793004" Feb 02 14:46:38 crc kubenswrapper[4869]: I0202 14:46:38.025262 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:38 crc kubenswrapper[4869]: I0202 14:46:38.026579 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:39 crc kubenswrapper[4869]: I0202 14:46:39.078370 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68hxt" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" probeResult="failure" output=< Feb 02 14:46:39 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 14:46:39 crc kubenswrapper[4869]: > Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.725155 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"] Feb 02 14:46:47 crc kubenswrapper[4869]: E0202 14:46:47.726433 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="pull" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726454 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="pull" Feb 02 14:46:47 crc kubenswrapper[4869]: E0202 14:46:47.726470 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="extract" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726478 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="extract" Feb 02 14:46:47 crc kubenswrapper[4869]: E0202 14:46:47.726494 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="util" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726502 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="util" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726641 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="extract" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.727282 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.729892 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.730080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pdfd4" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.730273 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.731428 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.732997 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.748618 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"] Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.832116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2vt\" (UniqueName: \"kubernetes.io/projected/7a0708ec-3eb5-4515-adf0-e36c732da54e-kube-api-access-vh2vt\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.832197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-apiservice-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.832231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-webhook-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.933273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2vt\" (UniqueName: \"kubernetes.io/projected/7a0708ec-3eb5-4515-adf0-e36c732da54e-kube-api-access-vh2vt\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.933341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-apiservice-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.933362 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-webhook-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"
Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.941604 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-apiservice-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"
Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.949716 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-webhook-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"
Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.954174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2vt\" (UniqueName: \"kubernetes.io/projected/7a0708ec-3eb5-4515-adf0-e36c732da54e-kube-api-access-vh2vt\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.063123 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"]
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.064024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.067415 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.068634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6gg9v"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.070099 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.080328 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.089762 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"]
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.120956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68hxt"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.140416 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-apiservice-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.140476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-webhook-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.140526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp599\" (UniqueName: \"kubernetes.io/projected/322f75dd-f952-451d-b505-400b173b382c-kube-api-access-gp599\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.218292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68hxt"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.243250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-webhook-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.243925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp599\" (UniqueName: \"kubernetes.io/projected/322f75dd-f952-451d-b505-400b173b382c-kube-api-access-gp599\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.244553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-apiservice-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.250746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-apiservice-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.266335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-webhook-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.301182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp599\" (UniqueName: \"kubernetes.io/projected/322f75dd-f952-451d-b505-400b173b382c-kube-api-access-gp599\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.384313 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.555942 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"]
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.714040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" event={"ID":"7a0708ec-3eb5-4515-adf0-e36c732da54e","Type":"ContainerStarted","Data":"68c346e18c5d1bd57b9cd380e7e7089ecfcc535d6384dbd95b433e49e0f388f6"}
Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.746366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"]
Feb 02 14:46:48 crc kubenswrapper[4869]: W0202 14:46:48.765417 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod322f75dd_f952_451d_b505_400b173b382c.slice/crio-ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14 WatchSource:0}: Error finding container ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14: Status 404 returned error can't find the container with id ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14
Feb 02 14:46:49 crc kubenswrapper[4869]: I0202 14:46:49.721712 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" event={"ID":"322f75dd-f952-451d-b505-400b173b382c","Type":"ContainerStarted","Data":"ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14"}
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.081205 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"]
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.081535 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68hxt" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" containerID="cri-o://412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" gracePeriod=2
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.567354 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt"
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.696162 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") "
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.701082 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") "
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.701139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") "
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.702283 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities" (OuterVolumeSpecName: "utilities") pod "1ba11fdd-6b64-41ad-9106-0eda21b92a5a" (UID: "1ba11fdd-6b64-41ad-9106-0eda21b92a5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.723217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767" (OuterVolumeSpecName: "kube-api-access-h4767") pod "1ba11fdd-6b64-41ad-9106-0eda21b92a5a" (UID: "1ba11fdd-6b64-41ad-9106-0eda21b92a5a"). InnerVolumeSpecName "kube-api-access-h4767". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738742 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" exitCode=0
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"}
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738933 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b"}
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738963 4869 scope.go:117] "RemoveContainer" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.739059 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt"
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.802814 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") on node \"crc\" DevicePath \"\""
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.802865 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.875638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ba11fdd-6b64-41ad-9106-0eda21b92a5a" (UID: "1ba11fdd-6b64-41ad-9106-0eda21b92a5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.904324 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 14:46:51 crc kubenswrapper[4869]: I0202 14:46:51.078527 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"]
Feb 02 14:46:51 crc kubenswrapper[4869]: I0202 14:46:51.083665 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"]
Feb 02 14:46:51 crc kubenswrapper[4869]: I0202 14:46:51.473964 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" path="/var/lib/kubelet/pods/1ba11fdd-6b64-41ad-9106-0eda21b92a5a/volumes"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.079265 4869 scope.go:117] "RemoveContainer" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.103241 4869 scope.go:117] "RemoveContainer" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.133925 4869 scope.go:117] "RemoveContainer" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"
Feb 02 14:46:52 crc kubenswrapper[4869]: E0202 14:46:52.135628 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd\": container with ID starting with 412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd not found: ID does not exist" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.135827 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"} err="failed to get container status \"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd\": rpc error: code = NotFound desc = could not find container \"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd\": container with ID starting with 412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd not found: ID does not exist"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.136004 4869 scope.go:117] "RemoveContainer" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"
Feb 02 14:46:52 crc kubenswrapper[4869]: E0202 14:46:52.136844 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892\": container with ID starting with a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892 not found: ID does not exist" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.136903 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"} err="failed to get container status \"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892\": rpc error: code = NotFound desc = could not find container \"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892\": container with ID starting with a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892 not found: ID does not exist"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.136981 4869 scope.go:117] "RemoveContainer" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"
Feb 02 14:46:52 crc kubenswrapper[4869]: E0202 14:46:52.137459 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4\": container with ID starting with 386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4 not found: ID does not exist" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"
Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.137538 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"} err="failed to get container status \"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4\": rpc error: code = NotFound desc = could not find container \"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4\": container with ID starting with 386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4 not found: ID does not exist"
Feb 02 14:46:55 crc kubenswrapper[4869]: I0202 14:46:55.788100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" event={"ID":"7a0708ec-3eb5-4515-adf0-e36c732da54e","Type":"ContainerStarted","Data":"8f899c60dacec5159f394efda1af763411c50d72d4cb2359d84cfdc989055fdb"}
Feb 02 14:46:55 crc kubenswrapper[4869]: I0202 14:46:55.788826 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"
Feb 02 14:46:55 crc kubenswrapper[4869]: I0202 14:46:55.813653 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" podStartSLOduration=2.835360931 podStartE2EDuration="8.813623569s" podCreationTimestamp="2026-02-02 14:46:47 +0000 UTC" firstStartedPulling="2026-02-02 14:46:48.569724747 +0000 UTC m=+810.214361517" lastFinishedPulling="2026-02-02 14:46:54.547987385 +0000 UTC m=+816.192624155" observedRunningTime="2026-02-02 14:46:55.811836945 +0000 UTC m=+817.456473715" watchObservedRunningTime="2026-02-02 14:46:55.813623569 +0000 UTC m=+817.458260329"
m=+817.456473715" watchObservedRunningTime="2026-02-02 14:46:55.813623569 +0000 UTC m=+817.458260329" Feb 02 14:46:56 crc kubenswrapper[4869]: I0202 14:46:56.797209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" event={"ID":"322f75dd-f952-451d-b505-400b173b382c","Type":"ContainerStarted","Data":"04b0a0f2a1283c9d50cb479ef5acca4afdeb272896bf39d7368f676a48ea372a"} Feb 02 14:46:56 crc kubenswrapper[4869]: I0202 14:46:56.798082 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:56 crc kubenswrapper[4869]: I0202 14:46:56.826843 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" podStartSLOduration=1.529894039 podStartE2EDuration="8.826811899s" podCreationTimestamp="2026-02-02 14:46:48 +0000 UTC" firstStartedPulling="2026-02-02 14:46:48.768927027 +0000 UTC m=+810.413563797" lastFinishedPulling="2026-02-02 14:46:56.065844887 +0000 UTC m=+817.710481657" observedRunningTime="2026-02-02 14:46:56.82280542 +0000 UTC m=+818.467442210" watchObservedRunningTime="2026-02-02 14:46:56.826811899 +0000 UTC m=+818.471448669" Feb 02 14:47:08 crc kubenswrapper[4869]: I0202 14:47:08.391218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.084298 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.808601 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jrfvv"] Feb 02 14:47:28 crc kubenswrapper[4869]: E0202 14:47:28.815318 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" Feb 02 14:47:28 crc kubenswrapper[4869]: E0202 14:47:28.815403 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-utilities" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815412 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-utilities" Feb 02 14:47:28 crc kubenswrapper[4869]: E0202 14:47:28.815430 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-content" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815439 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-content" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815689 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.818208 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.818931 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.819552 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.825263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.825327 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.825627 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-69bkb" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.826250 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.838393 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.932999 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qkkx4"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.934396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qkkx4" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.940414 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.940792 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.940895 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.941111 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mmdlj" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.949524 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-45hcg"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.950863 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.952591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.968263 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-45hcg"] Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-reloader\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-startup\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jmd\" (UniqueName: \"kubernetes.io/projected/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-kube-api-access-q8jmd\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdshz\" (UniqueName: \"kubernetes.io/projected/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-kube-api-access-qdshz\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-conf\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000885 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " 
pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-sockets\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6nl\" (UniqueName: \"kubernetes.io/projected/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-kube-api-access-2z6nl\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics-certs\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001289 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-cert\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.101902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-cert\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdct\" (UniqueName: \"kubernetes.io/projected/131f6807-e412-436c-8271-86f09259ae74-kube-api-access-bbdct\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-reloader\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102115 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-startup\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102141 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jmd\" (UniqueName: \"kubernetes.io/projected/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-kube-api-access-q8jmd\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 
14:47:29.102180 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdshz\" (UniqueName: \"kubernetes.io/projected/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-kube-api-access-qdshz\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-conf\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102276 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-sockets\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102407 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6nl\" (UniqueName: \"kubernetes.io/projected/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-kube-api-access-2z6nl\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102436 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/131f6807-e412-436c-8271-86f09259ae74-metallb-excludel2\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics-certs\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.103005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-conf\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.103064 4869 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.103325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-reloader\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.103607 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs podName:fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:29.603497616 +0000 UTC m=+851.248134576 (durationBeforeRetry 500ms). 
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.104600 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-startup\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.105010 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.105811 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.106131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-sockets\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.112356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics-certs\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.119263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-cert\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.124263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.124414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdshz\" (UniqueName: \"kubernetes.io/projected/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-kube-api-access-qdshz\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.128655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6nl\" (UniqueName: \"kubernetes.io/projected/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-kube-api-access-2z6nl\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.140948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jmd\" (UniqueName: \"kubernetes.io/projected/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-kube-api-access-q8jmd\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.147628 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.163080 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbdct\" (UniqueName: \"kubernetes.io/projected/131f6807-e412-436c-8271-86f09259ae74-kube-api-access-bbdct\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/131f6807-e412-436c-8271-86f09259ae74-metallb-excludel2\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.204835 4869 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.204977 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs podName:131f6807-e412-436c-8271-86f09259ae74 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:29.70494291 +0000 UTC m=+851.349579680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs") pod "speaker-qkkx4" (UID: "131f6807-e412-436c-8271-86f09259ae74") : secret "speaker-certs-secret" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.205288 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.205422 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist podName:131f6807-e412-436c-8271-86f09259ae74 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:29.705394691 +0000 UTC m=+851.350031661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist") pod "speaker-qkkx4" (UID: "131f6807-e412-436c-8271-86f09259ae74") : secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.207428 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/131f6807-e412-436c-8271-86f09259ae74-metallb-excludel2\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.227926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbdct\" (UniqueName: \"kubernetes.io/projected/131f6807-e412-436c-8271-86f09259ae74-kube-api-access-bbdct\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.442205 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"]
Feb 02 14:47:29 crc kubenswrapper[4869]: W0202 14:47:29.452410 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd389ca1e_a7e0_4a90_ae8a_f4d760b1ab1c.slice/crio-bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c WatchSource:0}: Error finding container bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c: Status 404 returned error can't find the container with id bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.613757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.624693 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.716083 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.716252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.716338 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.716490 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist podName:131f6807-e412-436c-8271-86f09259ae74 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:30.716464527 +0000 UTC m=+852.361101297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist") pod "speaker-qkkx4" (UID: "131f6807-e412-436c-8271-86f09259ae74") : secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.735099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.901138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.033988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"4be9a11b4f47d48af15104dac4c9951616657a8e24ee88d0dbe4177eb1125173"}
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.038636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" event={"ID":"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c","Type":"ContainerStarted","Data":"bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c"}
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.157310 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-45hcg"]
Feb 02 14:47:30 crc kubenswrapper[4869]: W0202 14:47:30.168602 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb7d0f1f_ea38_4756_b1fa_5fba1cc1a188.slice/crio-63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931 WatchSource:0}: Error finding container 63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931: Status 404 returned error can't find the container with id 63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.743529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.761169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.787057 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:30 crc kubenswrapper[4869]: W0202 14:47:30.852265 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131f6807_e412_436c_8271_86f09259ae74.slice/crio-03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88 WatchSource:0}: Error finding container 03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88: Status 404 returned error can't find the container with id 03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88
Feb 02 14:47:31 crc kubenswrapper[4869]: I0202 14:47:31.059429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-45hcg" event={"ID":"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188","Type":"ContainerStarted","Data":"00642b31af6a0d04cad645260ade532717bb2a1142bfe032bed0eb570ce64210"}
Feb 02 14:47:31 crc kubenswrapper[4869]: I0202 14:47:31.059507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-45hcg" event={"ID":"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188","Type":"ContainerStarted","Data":"63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931"}
Feb 02 14:47:31 crc kubenswrapper[4869]: I0202 14:47:31.061155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qkkx4" event={"ID":"131f6807-e412-436c-8271-86f09259ae74","Type":"ContainerStarted","Data":"03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.082509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qkkx4" event={"ID":"131f6807-e412-436c-8271-86f09259ae74","Type":"ContainerStarted","Data":"074ebe04aab5f18b86421f3553ba4f1b66f1b7c8c1b2cf7b2ff5980580c4ad8f"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.083025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qkkx4" event={"ID":"131f6807-e412-436c-8271-86f09259ae74","Type":"ContainerStarted","Data":"6e799a6f5ff21b0680fde73130a9ad0f1e73506fcfdf54e14761a395bf73792f"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.084376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.086350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-45hcg" event={"ID":"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188","Type":"ContainerStarted","Data":"769b275e637f6ab07ba74b759f6913ff9252bcd410d7484a20f676eb104d15ce"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.086975 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.118466 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qkkx4" podStartSLOduration=4.118441095 podStartE2EDuration="4.118441095s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:47:32.113152584 +0000 UTC m=+853.757789364" watchObservedRunningTime="2026-02-02 14:47:32.118441095 +0000 UTC m=+853.763077875"
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.138418 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-45hcg" podStartSLOduration=4.13839152 podStartE2EDuration="4.13839152s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:47:32.136479442 +0000 UTC m=+853.781116212" watchObservedRunningTime="2026-02-02 14:47:32.13839152 +0000 UTC m=+853.783028290"
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.202706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" event={"ID":"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c","Type":"ContainerStarted","Data":"3a9fcbc52cad7510cb70dd987494f6397abfffdfd750c71c7ebb5e5e38ee0c88"}
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.203534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.206677 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c02ed66-22a0-4bd3-b10b-8dbf872aac9d" containerID="1fa6f83a598986d828dad7af3c1b8fb05cc86b744229126c509170bfb725ed2a" exitCode=0
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.206785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerDied","Data":"1fa6f83a598986d828dad7af3c1b8fb05cc86b744229126c509170bfb725ed2a"}
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.261451 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" podStartSLOduration=2.577562387 podStartE2EDuration="12.261420512s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="2026-02-02 14:47:29.455161261 +0000 UTC m=+851.099798041" lastFinishedPulling="2026-02-02 14:47:39.139019396 +0000 UTC m=+860.783656166" observedRunningTime="2026-02-02 14:47:40.222305393 +0000 UTC m=+861.866942173" watchObservedRunningTime="2026-02-02 14:47:40.261420512 +0000 UTC m=+861.906057282"
Feb 02 14:47:41 crc kubenswrapper[4869]: I0202 14:47:41.217072 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c02ed66-22a0-4bd3-b10b-8dbf872aac9d" containerID="3b4a0df8763afebb1c377d1f4234d7e5f4ab5bfd96c2454f3d31647c7d282221" exitCode=0
Feb 02 14:47:41 crc kubenswrapper[4869]: I0202 14:47:41.217179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerDied","Data":"3b4a0df8763afebb1c377d1f4234d7e5f4ab5bfd96c2454f3d31647c7d282221"}
Feb 02 14:47:42 crc kubenswrapper[4869]: I0202 14:47:42.226739 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c02ed66-22a0-4bd3-b10b-8dbf872aac9d" containerID="d76f1ba917db524b828f430cdf069445b7b05471641b2c36ea8fbe07ddc380b9" exitCode=0
Feb 02 14:47:42 crc kubenswrapper[4869]: I0202 14:47:42.226803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerDied","Data":"d76f1ba917db524b828f430cdf069445b7b05471641b2c36ea8fbe07ddc380b9"}
Feb 02 14:47:43 crc kubenswrapper[4869]: I0202 14:47:43.237410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"da2535e0141c2157dbef7093fed584254d5d234146c1c0b6f1ae2361e87b76f8"}
event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"da2535e0141c2157dbef7093fed584254d5d234146c1c0b6f1ae2361e87b76f8"} Feb 02 14:47:43 crc kubenswrapper[4869]: I0202 14:47:43.238262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"71595c7fcd4f46a3f64b2f9ec09d35f68c5ef947592469fc2fa24c2fbd7ca480"} Feb 02 14:47:43 crc kubenswrapper[4869]: I0202 14:47:43.238279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"d98c315bdc13f1a58a1254bd61d0a1bb4d1abaab149127f6d0319e5de022553e"} Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252310 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"485cf16872161489184c324a1394499c5acc4ffe32b9734cdf6e654da673fe76"} Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252813 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"408d47df808c819c14ef45dd47d4aa75d69381ab1e7b60e8157b7a9a7c780529"} Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"faa2b27d97c863017eb6e7fca4e94e918076e5240ccd4c276c074c6c7641d161"} Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.283080 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jrfvv" podStartSLOduration=6.522341301 podStartE2EDuration="16.283063301s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="2026-02-02 14:47:29.35988299 +0000 UTC m=+851.004519760" lastFinishedPulling="2026-02-02 14:47:39.12060499 +0000 UTC m=+860.765241760" observedRunningTime="2026-02-02 14:47:44.278468187 +0000 UTC m=+865.923104967" watchObservedRunningTime="2026-02-02 14:47:44.283063301 +0000 UTC m=+865.927700071" Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.153104 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.163416 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.214754 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.906127 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-45hcg" Feb 02 14:47:50 crc kubenswrapper[4869]: I0202 14:47:50.790734 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qkkx4" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.546589 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"] Feb 02 14:47:53 crc 
kubenswrapper[4869]: I0202 14:47:53.547776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.550571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8vzf2" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.551023 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.551239 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.565485 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"] Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.719752 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"openstack-operator-index-r4p87\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.821515 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"openstack-operator-index-r4p87\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.845414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"openstack-operator-index-r4p87\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.875971 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:54 crc kubenswrapper[4869]: I0202 14:47:54.096189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"] Feb 02 14:47:54 crc kubenswrapper[4869]: I0202 14:47:54.352829 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerStarted","Data":"918bcb6635635cce2f73c2bf4aec94e06042c6edd128095de0e0218ebcac74d2"} Feb 02 14:47:56 crc kubenswrapper[4869]: I0202 14:47:56.919400 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"] Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.379531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerStarted","Data":"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"} Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.399738 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r4p87" podStartSLOduration=2.25181536 podStartE2EDuration="4.399716131s" podCreationTimestamp="2026-02-02 14:47:53 +0000 UTC" firstStartedPulling="2026-02-02 14:47:54.109045069 +0000 UTC m=+875.753681839" lastFinishedPulling="2026-02-02 14:47:56.25694584 +0000 UTC m=+877.901582610" observedRunningTime="2026-02-02 14:47:57.395841425 +0000 UTC m=+879.040478205" watchObservedRunningTime="2026-02-02 14:47:57.399716131 +0000 UTC m=+879.044352901" Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.521755 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g2t6v"] Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.522770 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.542320 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g2t6v"] Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.685952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hp9d\" (UniqueName: \"kubernetes.io/projected/39ba26b8-85bb-43c8-80cb-c9523ba9cac7-kube-api-access-4hp9d\") pod \"openstack-operator-index-g2t6v\" (UID: \"39ba26b8-85bb-43c8-80cb-c9523ba9cac7\") " pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.787561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hp9d\" (UniqueName: \"kubernetes.io/projected/39ba26b8-85bb-43c8-80cb-c9523ba9cac7-kube-api-access-4hp9d\") pod \"openstack-operator-index-g2t6v\" (UID: \"39ba26b8-85bb-43c8-80cb-c9523ba9cac7\") " pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.815568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hp9d\" (UniqueName: \"kubernetes.io/projected/39ba26b8-85bb-43c8-80cb-c9523ba9cac7-kube-api-access-4hp9d\") pod \"openstack-operator-index-g2t6v\" (UID: \"39ba26b8-85bb-43c8-80cb-c9523ba9cac7\") " pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.871239 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.305539 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g2t6v"] Feb 02 14:47:58 crc kubenswrapper[4869]: W0202 14:47:58.320072 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39ba26b8_85bb_43c8_80cb_c9523ba9cac7.slice/crio-83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647 WatchSource:0}: Error finding container 83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647: Status 404 returned error can't find the container with id 83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647 Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.388184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g2t6v" event={"ID":"39ba26b8-85bb-43c8-80cb-c9523ba9cac7","Type":"ContainerStarted","Data":"83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647"} Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.388348 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-r4p87" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server" containerID="cri-o://c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" gracePeriod=2 Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.933559 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.024993 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.043396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s" (OuterVolumeSpecName: "kube-api-access-bnj2s") pod "a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" (UID: "a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1"). InnerVolumeSpecName "kube-api-access-bnj2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.126852 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") on node \"crc\" DevicePath \"\"" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.167292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.398781 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" exitCode=0 Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.398858 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.398902 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerDied","Data":"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"} Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.399006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerDied","Data":"918bcb6635635cce2f73c2bf4aec94e06042c6edd128095de0e0218ebcac74d2"} Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.399041 4869 scope.go:117] "RemoveContainer" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.401104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g2t6v" event={"ID":"39ba26b8-85bb-43c8-80cb-c9523ba9cac7","Type":"ContainerStarted","Data":"a0b9a3526aed27a96592bba14976a41a65dfdb4702fa4415184f8d02c078df0f"} Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.417551 4869 scope.go:117] "RemoveContainer" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" Feb 02 14:47:59 crc kubenswrapper[4869]: E0202 14:47:59.418784 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9\": container with ID starting with c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9 not found: ID does not exist" 
containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.418832 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"} err="failed to get container status \"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9\": rpc error: code = NotFound desc = could not find container \"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9\": container with ID starting with c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9 not found: ID does not exist" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.439317 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g2t6v" podStartSLOduration=2.116625029 podStartE2EDuration="2.439286607s" podCreationTimestamp="2026-02-02 14:47:57 +0000 UTC" firstStartedPulling="2026-02-02 14:47:58.326237502 +0000 UTC m=+879.970874262" lastFinishedPulling="2026-02-02 14:47:58.64889907 +0000 UTC m=+880.293535840" observedRunningTime="2026-02-02 14:47:59.431536806 +0000 UTC m=+881.076173576" watchObservedRunningTime="2026-02-02 14:47:59.439286607 +0000 UTC m=+881.083923377" Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.450392 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"] Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.459348 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"] Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.478844 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" path="/var/lib/kubelet/pods/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1/volumes" Feb 02 14:48:07 crc kubenswrapper[4869]: I0202 14:48:07.871956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:48:07 crc kubenswrapper[4869]: I0202 14:48:07.872455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:48:07 crc kubenswrapper[4869]: I0202 14:48:07.904687 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:48:08 crc kubenswrapper[4869]: I0202 14:48:08.493617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g2t6v" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.819984 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"] Feb 02 14:48:16 crc kubenswrapper[4869]: E0202 14:48:16.821044 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.821061 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.821193 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.822314 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.825788 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-28g5k" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.837624 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"] Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.909185 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.909287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.909338 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.010966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.011314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.011864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.012053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.012057 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.041475 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.140314 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.610936 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"] Feb 02 14:48:17 crc kubenswrapper[4869]: W0202 14:48:17.621244 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode74d3905_6954_4c65_9cd2_d44a638ef83f.slice/crio-e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e WatchSource:0}: Error finding container e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e: Status 404 returned error can't find the container with id e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e Feb 02 14:48:18 crc kubenswrapper[4869]: I0202 14:48:18.538931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerStarted","Data":"e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e"} Feb 02 14:48:19 crc kubenswrapper[4869]: I0202 14:48:19.566321 4869 generic.go:334] "Generic (PLEG): container finished" podID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerID="ea2139921f41fa3e67ecddf9456cf45518c101b96748c442670311f452886063" exitCode=0 Feb 02 14:48:19 crc kubenswrapper[4869]: I0202 14:48:19.566449 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"ea2139921f41fa3e67ecddf9456cf45518c101b96748c442670311f452886063"} Feb 02 14:48:22 crc kubenswrapper[4869]: I0202 14:48:22.590057 4869 generic.go:334] "Generic (PLEG): container finished" podID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerID="c2f9b38b8211f1db1256483feb7abaa9a5e851d481d7ab79d536571be73a4836" exitCode=0 Feb 02 14:48:22 crc kubenswrapper[4869]: I0202 14:48:22.590188 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"c2f9b38b8211f1db1256483feb7abaa9a5e851d481d7ab79d536571be73a4836"} Feb 02 14:48:23 crc kubenswrapper[4869]: I0202 14:48:23.610294 4869 generic.go:334] "Generic (PLEG): container finished" podID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerID="bc04c443a7d5bfbfa579e22c993f6e1206879fa1bd3a48122d921a7fb485305c" exitCode=0 Feb 02 14:48:23 crc kubenswrapper[4869]: I0202 14:48:23.610360 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"bc04c443a7d5bfbfa579e22c993f6e1206879fa1bd3a48122d921a7fb485305c"} Feb 02 14:48:24 crc kubenswrapper[4869]: I0202 14:48:24.891798 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.045731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"e74d3905-6954-4c65-9cd2-d44a638ef83f\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.045827 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"e74d3905-6954-4c65-9cd2-d44a638ef83f\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.046882 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle" (OuterVolumeSpecName: "bundle") pod "e74d3905-6954-4c65-9cd2-d44a638ef83f" (UID: "e74d3905-6954-4c65-9cd2-d44a638ef83f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.047010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"e74d3905-6954-4c65-9cd2-d44a638ef83f\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.047447 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.060225 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2" (OuterVolumeSpecName: "kube-api-access-n6jf2") pod "e74d3905-6954-4c65-9cd2-d44a638ef83f" (UID: "e74d3905-6954-4c65-9cd2-d44a638ef83f"). InnerVolumeSpecName "kube-api-access-n6jf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.061513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util" (OuterVolumeSpecName: "util") pod "e74d3905-6954-4c65-9cd2-d44a638ef83f" (UID: "e74d3905-6954-4c65-9cd2-d44a638ef83f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.149659 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") on node \"crc\" DevicePath \"\"" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.149711 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") on node \"crc\" DevicePath \"\"" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.626408 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e"} Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.626476 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e" Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.626527 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.764698 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"] Feb 02 14:48:28 crc kubenswrapper[4869]: E0202 14:48:28.765664 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="extract" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765689 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="extract" Feb 02 14:48:28 crc kubenswrapper[4869]: E0202 14:48:28.765708 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="pull" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765717 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="pull" Feb 02 14:48:28 crc kubenswrapper[4869]: E0202 14:48:28.765739 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="util" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765749 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="util" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765896 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="extract" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.766556 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.769288 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sck9p" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.809353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxpd\" (UniqueName: \"kubernetes.io/projected/61702985-b65f-4603-9960-3a455bf05c9e-kube-api-access-crxpd\") pod \"openstack-operator-controller-init-5d75b9d66c-jsstz\" (UID: \"61702985-b65f-4603-9960-3a455bf05c9e\") " pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.810994 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"] Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.910192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxpd\" (UniqueName: \"kubernetes.io/projected/61702985-b65f-4603-9960-3a455bf05c9e-kube-api-access-crxpd\") pod \"openstack-operator-controller-init-5d75b9d66c-jsstz\" (UID: \"61702985-b65f-4603-9960-3a455bf05c9e\") " pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.935784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxpd\" (UniqueName: \"kubernetes.io/projected/61702985-b65f-4603-9960-3a455bf05c9e-kube-api-access-crxpd\") pod \"openstack-operator-controller-init-5d75b9d66c-jsstz\" (UID: \"61702985-b65f-4603-9960-3a455bf05c9e\") " pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:29 crc kubenswrapper[4869]: I0202 14:48:29.091522 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:29 crc kubenswrapper[4869]: I0202 14:48:29.564821 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"] Feb 02 14:48:29 crc kubenswrapper[4869]: W0202 14:48:29.579325 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61702985_b65f_4603_9960_3a455bf05c9e.slice/crio-4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947 WatchSource:0}: Error finding container 4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947: Status 404 returned error can't find the container with id 4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947 Feb 02 14:48:29 crc kubenswrapper[4869]: I0202 14:48:29.654707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" event={"ID":"61702985-b65f-4603-9960-3a455bf05c9e","Type":"ContainerStarted","Data":"4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947"} Feb 02 14:48:39 crc kubenswrapper[4869]: I0202 14:48:39.759416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" event={"ID":"61702985-b65f-4603-9960-3a455bf05c9e","Type":"ContainerStarted","Data":"49f24e968bce5445f5d8ed8f6f8ecda6263188dd37d57f4f253324e55685c4a5"} Feb 02 14:48:39 crc kubenswrapper[4869]: I0202 14:48:39.760484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:39 crc kubenswrapper[4869]: I0202 14:48:39.797114 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" podStartSLOduration=2.161908066 podStartE2EDuration="11.797042723s" podCreationTimestamp="2026-02-02 14:48:28 +0000 UTC" firstStartedPulling="2026-02-02 14:48:29.582111716 +0000 UTC m=+911.226748486" lastFinishedPulling="2026-02-02 14:48:39.217246353 +0000 UTC m=+920.861883143" observedRunningTime="2026-02-02 14:48:39.791587578 +0000 UTC m=+921.436224358" watchObservedRunningTime="2026-02-02 14:48:39.797042723 +0000 UTC m=+921.441679513" Feb 02 14:48:45 crc kubenswrapper[4869]: I0202 14:48:45.304772 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:48:45 crc kubenswrapper[4869]: I0202 14:48:45.307047 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:48:49 crc kubenswrapper[4869]: I0202 14:48:49.095941 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.851002 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:48:51 crc 
kubenswrapper[4869]: I0202 14:48:51.875748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.893451 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.913634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.913812 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.913887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.015618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 
02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.053608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.254449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.628931 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.862068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerStarted","Data":"500de773517075dede69276293fe3c80940ab88ef8e12edf6ec9251a25ac25db"} Feb 02 14:48:53 crc kubenswrapper[4869]: I0202 14:48:53.876191 4869 generic.go:334] "Generic (PLEG): container finished" podID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" exitCode=0 Feb 02 14:48:53 crc kubenswrapper[4869]: I0202 14:48:53.876267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92"} Feb 02 14:48:54 crc kubenswrapper[4869]: I0202 14:48:54.888992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerStarted","Data":"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d"} Feb 02 14:48:55 crc kubenswrapper[4869]: I0202 14:48:55.900280 4869 generic.go:334] "Generic (PLEG): container finished" podID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" exitCode=0 Feb 02 14:48:55 crc kubenswrapper[4869]: I0202 14:48:55.900345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d"} Feb 02 14:48:56 crc kubenswrapper[4869]: I0202 14:48:56.910684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerStarted","Data":"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6"} Feb 02 14:48:56 crc kubenswrapper[4869]: I0202 14:48:56.938106 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rgslv" podStartSLOduration=3.465744115 podStartE2EDuration="5.938077257s" podCreationTimestamp="2026-02-02 14:48:51 +0000 UTC" firstStartedPulling="2026-02-02 14:48:53.879056586 +0000 UTC m=+935.523693356" lastFinishedPulling="2026-02-02 14:48:56.351389728 +0000 UTC m=+937.996026498" observedRunningTime="2026-02-02 14:48:56.933134894 +0000 UTC m=+938.577771684" watchObservedRunningTime="2026-02-02 14:48:56.938077257 +0000 UTC m=+938.582714027" Feb 02 14:49:02 crc 
kubenswrapper[4869]: I0202 14:49:02.255285 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:02 crc kubenswrapper[4869]: I0202 14:49:02.256351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:02 crc kubenswrapper[4869]: I0202 14:49:02.350153 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:03 crc kubenswrapper[4869]: I0202 14:49:03.065270 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:03 crc kubenswrapper[4869]: I0202 14:49:03.206972 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:49:04 crc kubenswrapper[4869]: I0202 14:49:04.964939 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rgslv" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" containerID="cri-o://b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" gracePeriod=2 Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.475242 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.646794 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.647053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.647169 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.647732 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities" (OuterVolumeSpecName: "utilities") pod "2fd4143f-0316-463b-ae6e-1dc41ade5f61" (UID: "2fd4143f-0316-463b-ae6e-1dc41ade5f61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.654649 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl" (OuterVolumeSpecName: "kube-api-access-g9jrl") pod "2fd4143f-0316-463b-ae6e-1dc41ade5f61" (UID: "2fd4143f-0316-463b-ae6e-1dc41ade5f61"). InnerVolumeSpecName "kube-api-access-g9jrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.713186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fd4143f-0316-463b-ae6e-1dc41ade5f61" (UID: "2fd4143f-0316-463b-ae6e-1dc41ade5f61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.748450 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.748494 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.748508 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975735 4869 generic.go:334] "Generic (PLEG): container finished" podID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" exitCode=0 Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6"} Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"500de773517075dede69276293fe3c80940ab88ef8e12edf6ec9251a25ac25db"} Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975879 4869 scope.go:117] "RemoveContainer" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.976050 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.998141 4869 scope.go:117] "RemoveContainer" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.010456 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.019542 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.028294 4869 scope.go:117] "RemoveContainer" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.056044 4869 scope.go:117] "RemoveContainer" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" Feb 02 14:49:06 crc kubenswrapper[4869]: E0202 14:49:06.056784 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6\": container with ID starting with b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6 not found: ID does not exist" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.056866 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6"} err="failed to get container status \"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6\": rpc error: code = NotFound desc = could not find container \"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6\": container with ID starting with b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6 not found: ID does not exist" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.056937 4869 scope.go:117] "RemoveContainer" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" Feb 02 14:49:06 crc kubenswrapper[4869]: E0202 14:49:06.057512 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d\": container with ID starting with 8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d not found: ID does not exist" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.057563 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d"} err="failed to get container status \"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d\": rpc error: code = NotFound desc = could not find container \"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d\": container with ID starting with 8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d not found: ID does not exist" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.057598 4869 scope.go:117] "RemoveContainer" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" Feb 02 14:49:06 crc kubenswrapper[4869]: E0202 14:49:06.061558 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92\": container with ID starting with 8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92 not found: ID does not exist" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.061612 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92"} err="failed to get container status \"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92\": rpc error: code = NotFound desc = could not find container \"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92\": container with ID starting with 8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92 not found: ID does not exist" Feb 02 14:49:07 crc kubenswrapper[4869]: I0202 14:49:07.472451 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" path="/var/lib/kubelet/pods/2fd4143f-0316-463b-ae6e-1dc41ade5f61/volumes" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.588752 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:12 crc kubenswrapper[4869]: E0202 14:49:12.590875 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.590995 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" Feb 02 14:49:12 crc kubenswrapper[4869]: E0202 14:49:12.591094 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-utilities" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.591153 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-utilities" Feb 02 14:49:12 crc kubenswrapper[4869]: E0202 14:49:12.591235 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-content" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.591314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-content" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.591527 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.592618 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.605572 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.762437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.762491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.762517 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.863842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.863890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.863934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.864543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.864633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.888649 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.913463 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:13 crc kubenswrapper[4869]: I0202 14:49:13.499219 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.046093 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerID="eda72bcc55c95d316258cf868924e75f80c68e4d577ed22a50a3cec2426c387b" exitCode=0 Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.046198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"eda72bcc55c95d316258cf868924e75f80c68e4d577ed22a50a3cec2426c387b"} Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.046625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerStarted","Data":"34a6135c6d9cce7c37dc455df3519275e3b6866fffb9f04458808c6fea6ccae2"} Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.048561 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.719173 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.720389 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.722625 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cbtzv" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.738117 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.742346 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.743330 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.753355 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cqqn8" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.783685 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.795041 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.796522 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.800683 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-htrjw" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.823899 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.825092 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.834613 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-646rv" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.857818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.863390 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.864702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.869770 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-58ccw" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.891392 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.898844 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.899209 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900106 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4qr\" (UniqueName: \"kubernetes.io/projected/f605f0c6-e023-433b-8e78-373b32387809-kube-api-access-7m4qr\") pod \"barbican-operator-controller-manager-fc589b45f-28mqn\" (UID: \"f605f0c6-e023-433b-8e78-373b32387809\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvlh\" (UniqueName: \"kubernetes.io/projected/fc6638c4-5467-48c9-b725-284cd08372f6-kube-api-access-nwvlh\") pod \"cinder-operator-controller-manager-85899c864d-4cnfc\" (UID: \"fc6638c4-5467-48c9-b725-284cd08372f6\") " pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xqx\" (UniqueName: \"kubernetes.io/projected/f07dc950-121d-4a91-8489-dfc187196775-kube-api-access-l8xqx\") pod \"glance-operator-controller-manager-5d77f4dbc9-qmt77\" (UID: \"f07dc950-121d-4a91-8489-dfc187196775\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66khg\" (UniqueName: \"kubernetes.io/projected/5ea40597-21e0-4548-ab09-e381dac894ef-kube-api-access-66khg\") pod \"designate-operator-controller-manager-8f4c5cb64-pbxmj\" (UID: \"5ea40597-21e0-4548-ab09-e381dac894ef\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.905889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pnpct" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.914679 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.922704 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.010952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4qr\" (UniqueName: \"kubernetes.io/projected/f605f0c6-e023-433b-8e78-373b32387809-kube-api-access-7m4qr\") pod \"barbican-operator-controller-manager-fc589b45f-28mqn\" (UID: \"f605f0c6-e023-433b-8e78-373b32387809\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2sm\" (UniqueName: \"kubernetes.io/projected/53467de5-c9d7-4aa0-973d-180c8cb84b27-kube-api-access-xg2sm\") pod \"heat-operator-controller-manager-65dc6c8d9c-9ph7x\" (UID: \"53467de5-c9d7-4aa0-973d-180c8cb84b27\") " 
pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011031 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmmd\" (UniqueName: \"kubernetes.io/projected/ad8b0f9a-67d7-4897-af4b-f344b3d1c502-kube-api-access-pcmmd\") pod \"horizon-operator-controller-manager-5fb775575f-cpjjt\" (UID: \"ad8b0f9a-67d7-4897-af4b-f344b3d1c502\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwvlh\" (UniqueName: \"kubernetes.io/projected/fc6638c4-5467-48c9-b725-284cd08372f6-kube-api-access-nwvlh\") pod \"cinder-operator-controller-manager-85899c864d-4cnfc\" (UID: \"fc6638c4-5467-48c9-b725-284cd08372f6\") " pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xqx\" (UniqueName: \"kubernetes.io/projected/f07dc950-121d-4a91-8489-dfc187196775-kube-api-access-l8xqx\") pod \"glance-operator-controller-manager-5d77f4dbc9-qmt77\" (UID: \"f07dc950-121d-4a91-8489-dfc187196775\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66khg\" (UniqueName: \"kubernetes.io/projected/5ea40597-21e0-4548-ab09-e381dac894ef-kube-api-access-66khg\") pod \"designate-operator-controller-manager-8f4c5cb64-pbxmj\" (UID: \"5ea40597-21e0-4548-ab09-e381dac894ef\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.054420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xqx\" (UniqueName: \"kubernetes.io/projected/f07dc950-121d-4a91-8489-dfc187196775-kube-api-access-l8xqx\") pod \"glance-operator-controller-manager-5d77f4dbc9-qmt77\" (UID: \"f07dc950-121d-4a91-8489-dfc187196775\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.076754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66khg\" (UniqueName: \"kubernetes.io/projected/5ea40597-21e0-4548-ab09-e381dac894ef-kube-api-access-66khg\") pod \"designate-operator-controller-manager-8f4c5cb64-pbxmj\" (UID: \"5ea40597-21e0-4548-ab09-e381dac894ef\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.079568 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.080779 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.082124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerStarted","Data":"292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13"} Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.084018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwvlh\" (UniqueName: \"kubernetes.io/projected/fc6638c4-5467-48c9-b725-284cd08372f6-kube-api-access-nwvlh\") pod \"cinder-operator-controller-manager-85899c864d-4cnfc\" (UID: \"fc6638c4-5467-48c9-b725-284cd08372f6\") " pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.084278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.086800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4qr\" (UniqueName: \"kubernetes.io/projected/f605f0c6-e023-433b-8e78-373b32387809-kube-api-access-7m4qr\") pod \"barbican-operator-controller-manager-fc589b45f-28mqn\" (UID: \"f605f0c6-e023-433b-8e78-373b32387809\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.102555 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.104774 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.107485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-46pbm" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.108568 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jcwn9" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2sm\" (UniqueName: \"kubernetes.io/projected/53467de5-c9d7-4aa0-973d-180c8cb84b27-kube-api-access-xg2sm\") pod \"heat-operator-controller-manager-65dc6c8d9c-9ph7x\" (UID: \"53467de5-c9d7-4aa0-973d-180c8cb84b27\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119270 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcmmd\" (UniqueName: \"kubernetes.io/projected/ad8b0f9a-67d7-4897-af4b-f344b3d1c502-kube-api-access-pcmmd\") pod \"horizon-operator-controller-manager-5fb775575f-cpjjt\" (UID: \"ad8b0f9a-67d7-4897-af4b-f344b3d1c502\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz42l\" (UniqueName: \"kubernetes.io/projected/77902d6e-ef76-42b0-a40c-0b51f383f580-kube-api-access-nz42l\") pod \"ironic-operator-controller-manager-87bd9d46f-762xj\" (UID: \"77902d6e-ef76-42b0-a40c-0b51f383f580\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmsc\" (UniqueName: \"kubernetes.io/projected/c0779518-9e33-43e3-b373-263d74fbbd0f-kube-api-access-vpmsc\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.131100 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.132727 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.136053 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hmzpm" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.146603 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2sm\" (UniqueName: \"kubernetes.io/projected/53467de5-c9d7-4aa0-973d-180c8cb84b27-kube-api-access-xg2sm\") pod \"heat-operator-controller-manager-65dc6c8d9c-9ph7x\" (UID: \"53467de5-c9d7-4aa0-973d-180c8cb84b27\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.159879 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.167045 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.175060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.175757 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmmd\" (UniqueName: \"kubernetes.io/projected/ad8b0f9a-67d7-4897-af4b-f344b3d1c502-kube-api-access-pcmmd\") pod \"horizon-operator-controller-manager-5fb775575f-cpjjt\" (UID: \"ad8b0f9a-67d7-4897-af4b-f344b3d1c502\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.184521 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.185701 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.190357 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.193946 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.195497 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.201920 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gll2h" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.202613 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.204544 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tdm6w" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.214049 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tj4\" (UniqueName: \"kubernetes.io/projected/f27a3d01-fbc5-46d9-9c11-ef6c21ead605-kube-api-access-m8tj4\") pod \"keystone-operator-controller-manager-64469b487f-m9czv\" (UID: \"f27a3d01-fbc5-46d9-9c11-ef6c21ead605\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xsn\" (UniqueName: \"kubernetes.io/projected/993dae41-359f-47f7-9a2a-38f7c97d49de-kube-api-access-m8xsn\") pod \"manila-operator-controller-manager-7775d87d9d-l2b72\" (UID: \"993dae41-359f-47f7-9a2a-38f7c97d49de\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.224514 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.224592 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:15.724565701 +0000 UTC m=+957.369202471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwfs\" (UniqueName: \"kubernetes.io/projected/3b0cf904-7af8-4e57-a664-7e594e557445-kube-api-access-7xwfs\") pod \"mariadb-operator-controller-manager-67bf948998-hpnsb\" (UID: \"3b0cf904-7af8-4e57-a664-7e594e557445\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz42l\" (UniqueName: \"kubernetes.io/projected/77902d6e-ef76-42b0-a40c-0b51f383f580-kube-api-access-nz42l\") pod \"ironic-operator-controller-manager-87bd9d46f-762xj\" (UID: \"77902d6e-ef76-42b0-a40c-0b51f383f580\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmsc\" (UniqueName: \"kubernetes.io/projected/c0779518-9e33-43e3-b373-263d74fbbd0f-kube-api-access-vpmsc\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.247245 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-swhqr"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.248276 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.261559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r7q9n" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.275369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.287891 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.289398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmsc\" (UniqueName: \"kubernetes.io/projected/c0779518-9e33-43e3-b373-263d74fbbd0f-kube-api-access-vpmsc\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.289606 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.290242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz42l\" (UniqueName: \"kubernetes.io/projected/77902d6e-ef76-42b0-a40c-0b51f383f580-kube-api-access-nz42l\") pod \"ironic-operator-controller-manager-87bd9d46f-762xj\" (UID: \"77902d6e-ef76-42b0-a40c-0b51f383f580\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.294558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.300191 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2jrdb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.307152 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.307229 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.330059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tj4\" (UniqueName: \"kubernetes.io/projected/f27a3d01-fbc5-46d9-9c11-ef6c21ead605-kube-api-access-m8tj4\") pod \"keystone-operator-controller-manager-64469b487f-m9czv\" (UID: \"f27a3d01-fbc5-46d9-9c11-ef6c21ead605\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.385170 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-swhqr"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.385549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xsn\" (UniqueName: \"kubernetes.io/projected/993dae41-359f-47f7-9a2a-38f7c97d49de-kube-api-access-m8xsn\") pod \"manila-operator-controller-manager-7775d87d9d-l2b72\" (UID: \"993dae41-359f-47f7-9a2a-38f7c97d49de\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.385680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xwfs\" (UniqueName: \"kubernetes.io/projected/3b0cf904-7af8-4e57-a664-7e594e557445-kube-api-access-7xwfs\") pod \"mariadb-operator-controller-manager-67bf948998-hpnsb\" (UID: \"3b0cf904-7af8-4e57-a664-7e594e557445\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.389207 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.390075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.394467 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.430240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8tj4\" (UniqueName: \"kubernetes.io/projected/f27a3d01-fbc5-46d9-9c11-ef6c21ead605-kube-api-access-m8tj4\") pod \"keystone-operator-controller-manager-64469b487f-m9czv\" (UID: \"f27a3d01-fbc5-46d9-9c11-ef6c21ead605\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.460101 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-2chmz"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.461883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xsn\" (UniqueName: \"kubernetes.io/projected/993dae41-359f-47f7-9a2a-38f7c97d49de-kube-api-access-m8xsn\") pod \"manila-operator-controller-manager-7775d87d9d-l2b72\" (UID: \"993dae41-359f-47f7-9a2a-38f7c97d49de\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.462191 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.476734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xwfs\" (UniqueName: \"kubernetes.io/projected/3b0cf904-7af8-4e57-a664-7e594e557445-kube-api-access-7xwfs\") pod \"mariadb-operator-controller-manager-67bf948998-hpnsb\" (UID: \"3b0cf904-7af8-4e57-a664-7e594e557445\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.484573 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fgdqw" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.493156 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkctg\" (UniqueName: \"kubernetes.io/projected/7e9b35b2-f20d-4102-b541-63d2822c215d-kube-api-access-rkctg\") pod \"octavia-operator-controller-manager-7b89ddb58-h2kl2\" (UID: \"7e9b35b2-f20d-4102-b541-63d2822c215d\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.493293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8j79\" (UniqueName: \"kubernetes.io/projected/98a25bb6-75b1-49ad-8d7c-cc4e763470ec-kube-api-access-j8j79\") pod \"nova-operator-controller-manager-5644b66645-2chmz\" (UID: \"98a25bb6-75b1-49ad-8d7c-cc4e763470ec\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.493328 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5fj\" (UniqueName: \"kubernetes.io/projected/c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb-kube-api-access-wj5fj\") pod \"neutron-operator-controller-manager-576995988b-swhqr\" (UID: \"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.511812 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.515441 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-2chmz"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.525565 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.539866 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.552811 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.554235 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.557555 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xvmqq" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.563817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.582209 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.583482 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.588149 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.589063 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.594533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkctg\" (UniqueName: \"kubernetes.io/projected/7e9b35b2-f20d-4102-b541-63d2822c215d-kube-api-access-rkctg\") pod \"octavia-operator-controller-manager-7b89ddb58-h2kl2\" (UID: \"7e9b35b2-f20d-4102-b541-63d2822c215d\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.594640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8j79\" (UniqueName: \"kubernetes.io/projected/98a25bb6-75b1-49ad-8d7c-cc4e763470ec-kube-api-access-j8j79\") pod \"nova-operator-controller-manager-5644b66645-2chmz\" (UID: \"98a25bb6-75b1-49ad-8d7c-cc4e763470ec\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.594676 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5fj\" (UniqueName: \"kubernetes.io/projected/c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb-kube-api-access-wj5fj\") pod \"neutron-operator-controller-manager-576995988b-swhqr\" (UID: \"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.595679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kggvj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.630037 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.631733 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.647404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hgpvb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.675758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8j79\" (UniqueName: \"kubernetes.io/projected/98a25bb6-75b1-49ad-8d7c-cc4e763470ec-kube-api-access-j8j79\") pod \"nova-operator-controller-manager-5644b66645-2chmz\" (UID: \"98a25bb6-75b1-49ad-8d7c-cc4e763470ec\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.690047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.692754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5fj\" (UniqueName: \"kubernetes.io/projected/c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb-kube-api-access-wj5fj\") pod \"neutron-operator-controller-manager-576995988b-swhqr\" (UID: \"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdv9\" (UniqueName: \"kubernetes.io/projected/bd94e783-b3ec-4d7e-b669-98255f029da6-kube-api-access-qhdv9\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxl8\" (UniqueName: \"kubernetes.io/projected/ac2b0707-5906-40df-9457-06739f19df84-kube-api-access-mfxl8\") pod \"placement-operator-controller-manager-5b964cf4cd-6vnjh\" (UID: \"ac2b0707-5906-40df-9457-06739f19df84\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn2g\" (UniqueName: \"kubernetes.io/projected/cf357940-5e8d-4111-86e6-1fafd5e670cd-kube-api-access-7jn2g\") pod \"ovn-operator-controller-manager-788c46999f-28zx5\" (UID: \"cf357940-5e8d-4111-86e6-1fafd5e670cd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.703724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkctg\" (UniqueName: 
\"kubernetes.io/projected/7e9b35b2-f20d-4102-b541-63d2822c215d-kube-api-access-rkctg\") pod \"octavia-operator-controller-manager-7b89ddb58-h2kl2\" (UID: \"7e9b35b2-f20d-4102-b541-63d2822c215d\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.726906 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.737754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.742614 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pvkqz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.745900 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.778164 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.787648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn2g\" (UniqueName: \"kubernetes.io/projected/cf357940-5e8d-4111-86e6-1fafd5e670cd-kube-api-access-7jn2g\") pod \"ovn-operator-controller-manager-788c46999f-28zx5\" (UID: \"cf357940-5e8d-4111-86e6-1fafd5e670cd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdv9\" (UniqueName: \"kubernetes.io/projected/bd94e783-b3ec-4d7e-b669-98255f029da6-kube-api-access-qhdv9\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxl8\" (UniqueName: 
\"kubernetes.io/projected/ac2b0707-5906-40df-9457-06739f19df84-kube-api-access-mfxl8\") pod \"placement-operator-controller-manager-5b964cf4cd-6vnjh\" (UID: \"ac2b0707-5906-40df-9457-06739f19df84\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810005 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810064 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.310045131 +0000 UTC m=+957.954681901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810451 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810476 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.810467582 +0000 UTC m=+958.455104352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.820978 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.845281 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.854422 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sfb8j" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.887536 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.902939 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.910556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bw9p\" (UniqueName: \"kubernetes.io/projected/98a357a8-0e70-4f30-a41a-8dde25612a8a-kube-api-access-9bw9p\") pod \"swift-operator-controller-manager-7b89fdf75b-zdwh8\" (UID: \"98a357a8-0e70-4f30-a41a-8dde25612a8a\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.910637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7b2\" (UniqueName: \"kubernetes.io/projected/7af79025-a32d-4e73-9559-5991093e986a-kube-api-access-kc7b2\") pod \"telemetry-operator-controller-manager-565849b54-fm2kj\" (UID: \"7af79025-a32d-4e73-9559-5991093e986a\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.911450 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.924265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdv9\" (UniqueName: \"kubernetes.io/projected/bd94e783-b3ec-4d7e-b669-98255f029da6-kube-api-access-qhdv9\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.926549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn2g\" (UniqueName: \"kubernetes.io/projected/cf357940-5e8d-4111-86e6-1fafd5e670cd-kube-api-access-7jn2g\") pod \"ovn-operator-controller-manager-788c46999f-28zx5\" (UID: \"cf357940-5e8d-4111-86e6-1fafd5e670cd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.932854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxl8\" (UniqueName: \"kubernetes.io/projected/ac2b0707-5906-40df-9457-06739f19df84-kube-api-access-mfxl8\") pod \"placement-operator-controller-manager-5b964cf4cd-6vnjh\" (UID: \"ac2b0707-5906-40df-9457-06739f19df84\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.933003 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.934412 4869 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.944279 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.951803 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vn67c" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.955775 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.970333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2gbsl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.970553 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.977147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.990021 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.990728 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.011860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7b2\" (UniqueName: \"kubernetes.io/projected/7af79025-a32d-4e73-9559-5991093e986a-kube-api-access-kc7b2\") pod \"telemetry-operator-controller-manager-565849b54-fm2kj\" (UID: \"7af79025-a32d-4e73-9559-5991093e986a\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.012067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bw9p\" (UniqueName: \"kubernetes.io/projected/98a357a8-0e70-4f30-a41a-8dde25612a8a-kube-api-access-9bw9p\") pod \"swift-operator-controller-manager-7b89fdf75b-zdwh8\" (UID: \"98a357a8-0e70-4f30-a41a-8dde25612a8a\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.024010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.069648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bw9p\" (UniqueName: \"kubernetes.io/projected/98a357a8-0e70-4f30-a41a-8dde25612a8a-kube-api-access-9bw9p\") pod \"swift-operator-controller-manager-7b89fdf75b-zdwh8\" (UID: \"98a357a8-0e70-4f30-a41a-8dde25612a8a\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.075755 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.079235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc7b2\" (UniqueName: \"kubernetes.io/projected/7af79025-a32d-4e73-9559-5991093e986a-kube-api-access-kc7b2\") pod \"telemetry-operator-controller-manager-565849b54-fm2kj\" (UID: \"7af79025-a32d-4e73-9559-5991093e986a\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.128267 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerID="292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13" exitCode=0
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.128327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13"}
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.128824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd44g\" (UniqueName: \"kubernetes.io/projected/2dfa14d3-9496-44cb-948b-e4065a9930c8-kube-api-access-zd44g\") pod \"watcher-operator-controller-manager-586b95b788-9fsf5\" (UID: \"2dfa14d3-9496-44cb-948b-e4065a9930c8\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.129062 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwlj\" (UniqueName: \"kubernetes.io/projected/06f5e083-c0ea-4ad0-9a07-50707d84be61-kube-api-access-5zwlj\") pod \"test-operator-controller-manager-56f8bfcd9f-ntthk\" (UID: \"06f5e083-c0ea-4ad0-9a07-50707d84be61\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.144352 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"]
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.145980 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.152616 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.152812 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.155512 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-649np"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.179604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"]
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.208740 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"]
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.210146 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.227640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d5tx6"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.298643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwlj\" (UniqueName: \"kubernetes.io/projected/06f5e083-c0ea-4ad0-9a07-50707d84be61-kube-api-access-5zwlj\") pod \"test-operator-controller-manager-56f8bfcd9f-ntthk\" (UID: \"06f5e083-c0ea-4ad0-9a07-50707d84be61\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.298722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rtr\" (UniqueName: \"kubernetes.io/projected/6719d674-1dac-4af1-859b-ea6a2186a20a-kube-api-access-59rtr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-djzsw\" (UID: \"6719d674-1dac-4af1-859b-ea6a2186a20a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.298794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.299021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.299109 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd44g\" (UniqueName: \"kubernetes.io/projected/2dfa14d3-9496-44cb-948b-e4065a9930c8-kube-api-access-zd44g\") pod \"watcher-operator-controller-manager-586b95b788-9fsf5\" (UID: \"2dfa14d3-9496-44cb-948b-e4065a9930c8\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.299148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4sh\" (UniqueName: \"kubernetes.io/projected/32aa6b38-d480-426c-a36c-4cf34c082e73-kube-api-access-vz4sh\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.356976 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.384340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwlj\" (UniqueName: \"kubernetes.io/projected/06f5e083-c0ea-4ad0-9a07-50707d84be61-kube-api-access-5zwlj\") pod \"test-operator-controller-manager-56f8bfcd9f-ntthk\" (UID: \"06f5e083-c0ea-4ad0-9a07-50707d84be61\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.386710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd44g\" (UniqueName: \"kubernetes.io/projected/2dfa14d3-9496-44cb-948b-e4065a9930c8-kube-api-access-zd44g\") pod \"watcher-operator-controller-manager-586b95b788-9fsf5\" (UID: \"2dfa14d3-9496-44cb-948b-e4065a9930c8\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.390259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"]
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4sh\" (UniqueName: \"kubernetes.io/projected/32aa6b38-d480-426c-a36c-4cf34c082e73-kube-api-access-vz4sh\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59rtr\" (UniqueName: \"kubernetes.io/projected/6719d674-1dac-4af1-859b-ea6a2186a20a-kube-api-access-59rtr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-djzsw\" (UID: \"6719d674-1dac-4af1-859b-ea6a2186a20a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.410842 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.410954 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:17.410928734 +0000 UTC m=+959.055565504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.411600 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.411646 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.911633151 +0000 UTC m=+958.556269911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.412151 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.412209 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.912198735 +0000 UTC m=+958.556835505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.422397 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.435812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.452256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59rtr\" (UniqueName: \"kubernetes.io/projected/6719d674-1dac-4af1-859b-ea6a2186a20a-kube-api-access-59rtr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-djzsw\" (UID: \"6719d674-1dac-4af1-859b-ea6a2186a20a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.468235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4sh\" (UniqueName: \"kubernetes.io/projected/32aa6b38-d480-426c-a36c-4cf34c082e73-kube-api-access-vz4sh\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.491764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.712841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77"]
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.830384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.830721 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.830817 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:18.830774899 +0000 UTC m=+960.475411669 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: W0202 14:49:16.840563 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf07dc950_121d_4a91_8489_dfc187196775.slice/crio-c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22 WatchSource:0}: Error finding container c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22: Status 404 returned error can't find the container with id c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.947297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.947434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.947751 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.947833 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:17.947809589 +0000 UTC m=+959.592446359 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.948426 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.948470 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:17.948459495 +0000 UTC m=+959.593096265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found
Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.988841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.010884 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.067490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.088298 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc"]
Feb 02 14:49:17 crc kubenswrapper[4869]: W0202 14:49:17.157696 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53467de5_c9d7_4aa0_973d_180c8cb84b27.slice/crio-eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37 WatchSource:0}: Error finding container eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37: Status 404 returned error can't find the container with id eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.157931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" event={"ID":"5ea40597-21e0-4548-ab09-e381dac894ef","Type":"ContainerStarted","Data":"66c6fd837dcd71931e3097318cf979cba422c0b7036eacce4cb44efeabc22bc3"}
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.161576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" event={"ID":"f07dc950-121d-4a91-8489-dfc187196775","Type":"ContainerStarted","Data":"c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22"}
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.167590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" event={"ID":"ad8b0f9a-67d7-4897-af4b-f344b3d1c502","Type":"ContainerStarted","Data":"9ec46e395679c23eeb9c8f74127a0244184326a8559ec2b1db534251ce0c0846"}
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.169569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" event={"ID":"fc6638c4-5467-48c9-b725-284cd08372f6","Type":"ContainerStarted","Data":"3c3b47259b7c0fc9966a57a2b37172aec96795f374100a8b07641e0b88e85a16"}
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.459731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.460407 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.460474 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:19.460454243 +0000 UTC m=+961.105091013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.523073 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.533690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.550241 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb"]
Feb 02 14:49:17 crc kubenswrapper[4869]: W0202 14:49:17.570988 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b0cf904_7af8_4e57_a664_7e594e557445.slice/crio-8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99 WatchSource:0}: Error finding container 8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99: Status 404 returned error can't find the container with id 8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.776827 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.812386 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.870495 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"]
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.970933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.971073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971319 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971403 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:19.971373795 +0000 UTC m=+961.616010565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found
Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971467 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971493 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:19.971484618 +0000 UTC m=+961.616121388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.013282 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk6t7"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.033425 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.045690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.074369 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.074641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.074815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.076603 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-2chmz"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.177678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.177774 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.177860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.182598 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.183557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.212596 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" event={"ID":"53467de5-c9d7-4aa0-973d-180c8cb84b27","Type":"ContainerStarted","Data":"eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.226276 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.228616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" event={"ID":"f27a3d01-fbc5-46d9-9c11-ef6c21ead605","Type":"ContainerStarted","Data":"0ebd7b98b948904756d3563f45b1c8df7ec70ea597dc9a010bc530676e6f73a6"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.241011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerStarted","Data":"ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.249084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" event={"ID":"77902d6e-ef76-42b0-a40c-0b51f383f580","Type":"ContainerStarted","Data":"00ea06048ddc8667830932e41773107435e41ca5403583340fe6f4b0ba9e7248"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.291319 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.307810 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.312703 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cgj22" podStartSLOduration=3.487441797 podStartE2EDuration="6.312673244s" podCreationTimestamp="2026-02-02 14:49:12 +0000 UTC" firstStartedPulling="2026-02-02 14:49:14.048198338 +0000 UTC m=+955.692835108" lastFinishedPulling="2026-02-02 14:49:16.873429785 +0000 UTC m=+958.518066555" observedRunningTime="2026-02-02 14:49:18.280720682 +0000 UTC m=+959.925357452" watchObservedRunningTime="2026-02-02 14:49:18.312673244 +0000 UTC m=+959.957310014"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.332307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" event={"ID":"98a357a8-0e70-4f30-a41a-8dde25612a8a","Type":"ContainerStarted","Data":"07ac21f91f40125a213a6bd8f6b22e8cc4accd5f96868be2a5f0564d14e942e9"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.338621 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.340257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" event={"ID":"f605f0c6-e023-433b-8e78-373b32387809","Type":"ContainerStarted","Data":"ac8117740631684f2b607a6456bc5d0ae94ea118c1bf1ebc98c98c2571998033"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.347824 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5"]
Feb 02 14:49:18 crc kubenswrapper[4869]: W0202 14:49:18.349079 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af79025_a32d_4e73_9559_5991093e986a.slice/crio-9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced WatchSource:0}: Error finding container 9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced: Status 404 returned error can't find the container with id 9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.363359 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh"]
Feb 02 14:49:18 crc kubenswrapper[4869]: W0202 14:49:18.371050 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf357940_5e8d_4111_86e6_1fafd5e670cd.slice/crio-926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e WatchSource:0}: Error finding container 926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e: Status 404 returned error can't find the container with id 926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.370259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-swhqr"]
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.371759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" event={"ID":"98a25bb6-75b1-49ad-8d7c-cc4e763470ec","Type":"ContainerStarted","Data":"64a89e976ccd1c3efced28ada4285b1efdcbdd3a1c28ca634a1b93949bda31ef"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.376117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" event={"ID":"3b0cf904-7af8-4e57-a664-7e594e557445","Type":"ContainerStarted","Data":"8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.408486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"]
Feb 02 14:49:18 crc kubenswrapper[4869]: W0202 14:49:18.408526 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06f5e083_c0ea_4ad0_9a07_50707d84be61.slice/crio-e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a WatchSource:0}: Error finding container e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a: Status 404 returned error can't find the container with id e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.410612 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.443931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" event={"ID":"993dae41-359f-47f7-9a2a-38f7c97d49de","Type":"ContainerStarted","Data":"92f2d4cc86f3d1a27e46b54ba4f6d0191c419271b083d99ade0721689e9a6ffa"}
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.458306 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"]
Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.486431 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd44g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-586b95b788-9fsf5_openstack-operators(2dfa14d3-9496-44cb-948b-e4065a9930c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.487735 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podUID="2dfa14d3-9496-44cb-948b-e4065a9930c8"
Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.495162 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59rtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-djzsw_openstack-operators(6719d674-1dac-4af1-859b-ea6a2186a20a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.496879 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a"
Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.931261 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"
Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.931936 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.932058 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:22.932022393 +0000 UTC m=+964.576659343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.253641 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"]
Feb 02 14:49:19 crc kubenswrapper[4869]: W0202 14:49:19.344144 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8bef13a_7759_4c87_be0b_09017f74f36e.slice/crio-b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a WatchSource:0}: Error finding container b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a: Status 404 returned error can't find the container with id b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.536801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" event={"ID":"cf357940-5e8d-4111-86e6-1fafd5e670cd","Type":"ContainerStarted","Data":"926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.538727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" event={"ID":"2dfa14d3-9496-44cb-948b-e4065a9930c8","Type":"ContainerStarted","Data":"770a9320d96169d0bbb22a9377187377241d576110e2a54baf61ea71b02dfce8"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.560885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.562745 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.562805 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:23.562785376 +0000 UTC m=+965.207422146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.587651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podUID="2dfa14d3-9496-44cb-948b-e4065a9930c8"
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.588435 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerStarted","Data":"b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.614554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" event={"ID":"6719d674-1dac-4af1-859b-ea6a2186a20a","Type":"ContainerStarted","Data":"a2218b87a7b0fae5af909cb8be6f92dbe6e298bd3eb6f3252f40f1912552acea"}
Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.621850 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a"
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.625747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" event={"ID":"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb","Type":"ContainerStarted","Data":"7eb1becba457956f29745fa0781faa4b802729fe13f354544b25af7864351dcc"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.649122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" event={"ID":"06f5e083-c0ea-4ad0-9a07-50707d84be61","Type":"ContainerStarted","Data":"e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.695376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" event={"ID":"ac2b0707-5906-40df-9457-06739f19df84","Type":"ContainerStarted","Data":"88c434aad9ad58199752e96590ad12e2c6b934f4898a7cc0f7e46791b942e5e3"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.705537 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" event={"ID":"7af79025-a32d-4e73-9559-5991093e986a","Type":"ContainerStarted","Data":"9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced"}
Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.717603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" event={"ID":"7e9b35b2-f20d-4102-b541-63d2822c215d","Type":"ContainerStarted","Data":"3a13c4491e87656cc0b11ffcec9957dc38d9e5630a640ace1b6c38b86044ae20"}
Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.083235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.083895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.084300 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.084382 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:24.084358172 +0000 UTC m=+965.728994942 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found
Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.085482 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.085542 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:24.085528611 +0000 UTC m=+965.730165381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found
Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.744179 4869 generic.go:334] "Generic (PLEG): container finished" podID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" exitCode=0
Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.746155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f"}
Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.748545 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podUID="2dfa14d3-9496-44cb-948b-e4065a9930c8"
Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.748623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a"
Feb 02 14:49:22 crc kubenswrapper[4869]: I0202 14:49:22.914217 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cgj22"
Feb 02 14:49:22 crc kubenswrapper[4869]: I0202 14:49:22.924454 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cgj22"
Feb 02 14:49:22 crc kubenswrapper[4869]: I0202 14:49:22.984287 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"
Feb 02 14:49:22 crc kubenswrapper[4869]: E0202 14:49:22.984551 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 14:49:22 crc kubenswrapper[4869]: E0202 14:49:22.984674 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:30.98464165 +0000 UTC m=+972.629278600 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found
Feb 02 14:49:23 crc kubenswrapper[4869]: I0202 14:49:23.033299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cgj22"
Feb 02 14:49:23 crc kubenswrapper[4869]: I0202 14:49:23.604897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:23 crc kubenswrapper[4869]: E0202 14:49:23.605213 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:23 crc kubenswrapper[4869]: E0202 14:49:23.605307 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:31.605275661 +0000 UTC m=+973.249912431 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 14:49:23 crc kubenswrapper[4869]: I0202 14:49:23.866192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cgj22"
Feb 02 14:49:24 crc kubenswrapper[4869]: I0202 14:49:24.116290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:24 crc kubenswrapper[4869]: I0202 14:49:24.116435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116598 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116735 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:32.116704005 +0000 UTC m=+973.761340965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found
Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116626 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116812 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:32.116790127 +0000 UTC m=+973.761426897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found
Feb 02 14:49:25 crc kubenswrapper[4869]: I0202 14:49:25.153672 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"]
Feb 02 14:49:26 crc kubenswrapper[4869]: I0202 14:49:26.832471 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" containerID="cri-o://ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" gracePeriod=2
Feb 02 14:49:27 crc kubenswrapper[4869]: I0202 14:49:27.841500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerStarted","Data":"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b"}
Feb 02 14:49:27 crc kubenswrapper[4869]: I0202 14:49:27.845784 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" exitCode=0
Feb 02 14:49:27 crc kubenswrapper[4869]: I0202 14:49:27.845870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091"}
Feb 02 14:49:28 crc kubenswrapper[4869]: I0202 14:49:28.856418 4869 generic.go:334] "Generic (PLEG): container finished" podID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" exitCode=0
Feb 02 14:49:28 crc kubenswrapper[4869]: I0202 14:49:28.856483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b"}
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.050479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.067635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.092678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-46pbm"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.101038 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.660480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.666626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.818981 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xvmqq"
Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.827706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"
Feb 02 14:49:32 crc kubenswrapper[4869]: I0202 14:49:32.179514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:32 crc kubenswrapper[4869]: I0202 14:49:32.179624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"
Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179766 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179822 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179889 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:48.179868719 +0000 UTC m=+989.824505489 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found
Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179923 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:48.17989913 +0000 UTC m=+989.824535900 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.914870 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.915734 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.916361 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.916407 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.463820 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:9fa80e6901c5db08f3ed7bece144698223b0b60d2309a2b509b0a23dd07042d9" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.464090 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:9fa80e6901c5db08f3ed7bece144698223b0b60d2309a2b509b0a23dd07042d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nz42l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-87bd9d46f-762xj_openstack-operators(77902d6e-ef76-42b0-a40c-0b51f383f580): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.465332 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" podUID="77902d6e-ef76-42b0-a40c-0b51f383f580" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.895378 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:9fa80e6901c5db08f3ed7bece144698223b0b60d2309a2b509b0a23dd07042d9\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" podUID="77902d6e-ef76-42b0-a40c-0b51f383f580" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.165224 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:be0d0110cb736cbaaf0508da2a961913ca822bbaf5592ae8f23812570d9c2120" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.165530 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:be0d0110cb736cbaaf0508da2a961913ca822bbaf5592ae8f23812570d9c2120,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8xsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7775d87d9d-l2b72_openstack-operators(993dae41-359f-47f7-9a2a-38f7c97d49de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.166830 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" podUID="993dae41-359f-47f7-9a2a-38f7c97d49de" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.901885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:be0d0110cb736cbaaf0508da2a961913ca822bbaf5592ae8f23812570d9c2120\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" podUID="993dae41-359f-47f7-9a2a-38f7c97d49de" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.036707 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.037024 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xwfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-hpnsb_openstack-operators(3b0cf904-7af8-4e57-a664-7e594e557445): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.038294 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" podUID="3b0cf904-7af8-4e57-a664-7e594e557445" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.740495 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:0d329ab746aa36e748f3d236599b186dc9787c63630f91bc2975d7e784d837be" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.740792 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:0d329ab746aa36e748f3d236599b186dc9787c63630f91bc2975d7e784d837be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66khg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-8f4c5cb64-pbxmj_openstack-operators(5ea40597-21e0-4548-ab09-e381dac894ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.742068 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" podUID="5ea40597-21e0-4548-ab09-e381dac894ef" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.909497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:0d329ab746aa36e748f3d236599b186dc9787c63630f91bc2975d7e784d837be\\\"\"" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" podUID="5ea40597-21e0-4548-ab09-e381dac894ef" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.911437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" 
podUID="3b0cf904-7af8-4e57-a664-7e594e557445" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.454111 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.454855 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfxl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-6vnjh_openstack-operators(ac2b0707-5906-40df-9457-06739f19df84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.456178 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" podUID="ac2b0707-5906-40df-9457-06739f19df84" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.921088 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" podUID="ac2b0707-5906-40df-9457-06739f19df84" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.266059 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:674639c6f9130078d6b5e4bace30435325651c82f3090681562c9cf6655b9576" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.266331 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:674639c6f9130078d6b5e4bace30435325651c82f3090681562c9cf6655b9576,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-565849b54-fm2kj_openstack-operators(7af79025-a32d-4e73-9559-5991093e986a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.268256 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" 
podUID="7af79025-a32d-4e73-9559-5991093e986a" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.926925 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:674639c6f9130078d6b5e4bace30435325651c82f3090681562c9cf6655b9576\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" podUID="7af79025-a32d-4e73-9559-5991093e986a" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.950111 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.950359 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bw9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7b89fdf75b-zdwh8_openstack-operators(98a357a8-0e70-4f30-a41a-8dde25612a8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.951622 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" podUID="98a357a8-0e70-4f30-a41a-8dde25612a8a" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.695332 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.695958 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zwlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-ntthk_openstack-operators(06f5e083-c0ea-4ad0-9a07-50707d84be61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.697507 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" podUID="06f5e083-c0ea-4ad0-9a07-50707d84be61" Feb 02 14:49:38 crc 
kubenswrapper[4869]: E0202 14:49:38.935953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" podUID="06f5e083-c0ea-4ad0-9a07-50707d84be61" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.936443 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" podUID="98a357a8-0e70-4f30-a41a-8dde25612a8a" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.422853 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:840e391b9a51241176705a421996a17a1433878433ce8720d4ed1a4b69319ccd" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.423201 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:840e391b9a51241176705a421996a17a1433878433ce8720d4ed1a4b69319ccd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7m4qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod barbican-operator-controller-manager-fc589b45f-28mqn_openstack-operators(f605f0c6-e023-433b-8e78-373b32387809): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.424474 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" podUID="f605f0c6-e023-433b-8e78-373b32387809" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.942220 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:840e391b9a51241176705a421996a17a1433878433ce8720d4ed1a4b69319ccd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" podUID="f605f0c6-e023-433b-8e78-373b32387809" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.632669 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:3b23ff94b16ca60ae67e31a0f4e85af246c7f16dd03ed8ab6f33f81b3a3a8aa8" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.633375 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:3b23ff94b16ca60ae67e31a0f4e85af246c7f16dd03ed8ab6f33f81b3a3a8aa8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8xqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5d77f4dbc9-qmt77_openstack-operators(f07dc950-121d-4a91-8489-dfc187196775): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.635139 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" podUID="f07dc950-121d-4a91-8489-dfc187196775" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.953628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:3b23ff94b16ca60ae67e31a0f4e85af246c7f16dd03ed8ab6f33f81b3a3a8aa8\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" podUID="f07dc950-121d-4a91-8489-dfc187196775" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.270508 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.270759 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jn2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-28zx5_openstack-operators(cf357940-5e8d-4111-86e6-1fafd5e670cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.272233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" podUID="cf357940-5e8d-4111-86e6-1fafd5e670cd" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.915627 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.916378 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.916821 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.916862 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:49:42 crc 
kubenswrapper[4869]: E0202 14:49:42.981962 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" podUID="cf357940-5e8d-4111-86e6-1fafd5e670cd" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.757171 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.758071 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkctg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7b89ddb58-h2kl2_openstack-operators(7e9b35b2-f20d-4102-b541-63d2822c215d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.759529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" podUID="7e9b35b2-f20d-4102-b541-63d2822c215d" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.987501 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" podUID="7e9b35b2-f20d-4102-b541-63d2822c215d" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 14:49:44.526955 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 14:49:44.527723 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8tj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-64469b487f-m9czv_openstack-operators(f27a3d01-fbc5-46d9-9c11-ef6c21ead605): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 
14:49:44.528954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" podUID="f27a3d01-fbc5-46d9-9c11-ef6c21ead605" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 14:49:44.993618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" podUID="f27a3d01-fbc5-46d9-9c11-ef6c21ead605" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.304072 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.304141 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.304195 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.305065 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.305137 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56" gracePeriod=600 Feb 02 14:49:46 crc kubenswrapper[4869]: I0202 14:49:46.000971 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56" exitCode=0 Feb 02 14:49:46 crc kubenswrapper[4869]: I0202 14:49:46.001039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56"} Feb 02 14:49:46 crc kubenswrapper[4869]: I0202 14:49:46.001106 4869 scope.go:117] "RemoveContainer" containerID="e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.275888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.276423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.283282 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.283483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.576195 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-649np" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.583805 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.915386 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.916849 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.917154 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.917181 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.314516 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.349686 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.350033 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8j79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5644b66645-2chmz_openstack-operators(98a25bb6-75b1-49ad-8d7c-cc4e763470ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.377261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" podUID="98a25bb6-75b1-49ad-8d7c-cc4e763470ec" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.482477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.482615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.482741 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.484320 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities" (OuterVolumeSpecName: "utilities") pod "ff654c3f-299a-4ca0-b9b0-ecd963f680c9" (UID: "ff654c3f-299a-4ca0-b9b0-ecd963f680c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.489762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr" (OuterVolumeSpecName: "kube-api-access-bc2nr") pod "ff654c3f-299a-4ca0-b9b0-ecd963f680c9" (UID: "ff654c3f-299a-4ca0-b9b0-ecd963f680c9"). InnerVolumeSpecName "kube-api-access-bc2nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.508335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff654c3f-299a-4ca0-b9b0-ecd963f680c9" (UID: "ff654c3f-299a-4ca0-b9b0-ecd963f680c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.585422 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.585464 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.585474 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.853901 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.854165 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59rtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-djzsw_openstack-operators(6719d674-1dac-4af1-859b-ea6a2186a20a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.855375 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.071131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"34a6135c6d9cce7c37dc455df3519275e3b6866fffb9f04458808c6fea6ccae2"} Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.071237 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:54 crc kubenswrapper[4869]: E0202 14:49:54.089485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" podUID="98a25bb6-75b1-49ad-8d7c-cc4e763470ec" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.132686 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.139849 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.274567 4869 scope.go:117] "RemoveContainer" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.528857 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.587874 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.828717 4869 scope.go:117] "RemoveContainer" containerID="292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.898235 4869 scope.go:117] "RemoveContainer" containerID="eda72bcc55c95d316258cf868924e75f80c68e4d577ed22a50a3cec2426c387b" Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.111267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" event={"ID":"bd94e783-b3ec-4d7e-b669-98255f029da6","Type":"ContainerStarted","Data":"06f524340ca6f7602aa48458621e7c6091d0cf2fa45c25aee91a0ae804a14a5c"} Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.127285 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" event={"ID":"c0779518-9e33-43e3-b373-263d74fbbd0f","Type":"ContainerStarted","Data":"de215708ac9df5c372c8284f222ad9800dbe2a2e9010105836019917220bc997"} Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.355378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"] Feb 02 14:49:55 crc kubenswrapper[4869]: W0202 14:49:55.425624 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32aa6b38_d480_426c_a36c_4cf34c082e73.slice/crio-ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d WatchSource:0}: Error finding container ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d: Status 404 returned error can't find the container with id ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.479307 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" path="/var/lib/kubelet/pods/ff654c3f-299a-4ca0-b9b0-ecd963f680c9/volumes" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.159523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" event={"ID":"ac2b0707-5906-40df-9457-06739f19df84","Type":"ContainerStarted","Data":"24a82b94e8a8fac36c907f81426cc483ab799fa2ad64b0536a54a3e4030f8ad2"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.161291 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.172124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerStarted","Data":"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.200250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" event={"ID":"7af79025-a32d-4e73-9559-5991093e986a","Type":"ContainerStarted","Data":"38fab07f2a2003158ac96ea51832181cdb6a9619fc4e382bca67532616d594e0"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.200694 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.217193 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" podStartSLOduration=4.832949263 podStartE2EDuration="41.21716133s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.485599659 +0000 UTC m=+960.130236429" lastFinishedPulling="2026-02-02 14:49:54.869811726 +0000 UTC m=+996.514448496" observedRunningTime="2026-02-02 14:49:56.2114972 +0000 UTC m=+997.856133980" watchObservedRunningTime="2026-02-02 14:49:56.21716133 +0000 UTC m=+997.861798090" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.232487 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" event={"ID":"5ea40597-21e0-4548-ab09-e381dac894ef","Type":"ContainerStarted","Data":"72d46cf7e3cadf0e98acca38a77ae82eb13fd8479f0ded3ef72b99ad2ec9339f"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.232858 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.249666 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" 
event={"ID":"3b0cf904-7af8-4e57-a664-7e594e557445","Type":"ContainerStarted","Data":"56282af3e2af06a979f62d650cdf0f65404b47825edc532462ccca46466b9917"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.251852 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.252825 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk6t7" podStartSLOduration=6.171161966 podStartE2EDuration="39.252799172s" podCreationTimestamp="2026-02-02 14:49:17 +0000 UTC" firstStartedPulling="2026-02-02 14:49:20.750013259 +0000 UTC m=+962.394650029" lastFinishedPulling="2026-02-02 14:49:53.831650475 +0000 UTC m=+995.476287235" observedRunningTime="2026-02-02 14:49:56.240283562 +0000 UTC m=+997.884920332" watchObservedRunningTime="2026-02-02 14:49:56.252799172 +0000 UTC m=+997.897435942" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.279008 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" podStartSLOduration=4.783829074 podStartE2EDuration="41.27898095s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.362215792 +0000 UTC m=+960.006852562" lastFinishedPulling="2026-02-02 14:49:54.857367668 +0000 UTC m=+996.502004438" observedRunningTime="2026-02-02 14:49:56.276540769 +0000 UTC m=+997.921177539" watchObservedRunningTime="2026-02-02 14:49:56.27898095 +0000 UTC m=+997.923617720" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.280447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" event={"ID":"993dae41-359f-47f7-9a2a-38f7c97d49de","Type":"ContainerStarted","Data":"2e3b62b5604f6cc7141dea720abafa7d154d2db3a239a304ba52cb43a0df75a9"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.281147 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.322119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" event={"ID":"f605f0c6-e023-433b-8e78-373b32387809","Type":"ContainerStarted","Data":"dd22013abd5eb7835955913ca084fe8ff662493eb8f3bf76692b608f74a4912d"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.323178 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.361550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" event={"ID":"2dfa14d3-9496-44cb-948b-e4065a9930c8","Type":"ContainerStarted","Data":"5830f84959c97c617ff24abe5b6b4c7213bb98b0e1447fa18abc7da308f5b925"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.362192 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" podStartSLOduration=5.083036055 podStartE2EDuration="42.362166629s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.579882714 +0000 UTC m=+959.224519484" 
lastFinishedPulling="2026-02-02 14:49:54.859013288 +0000 UTC m=+996.503650058" observedRunningTime="2026-02-02 14:49:56.319127704 +0000 UTC m=+997.963764494" watchObservedRunningTime="2026-02-02 14:49:56.362166629 +0000 UTC m=+998.006803399" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.362709 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.365185 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" podStartSLOduration=4.599640084 podStartE2EDuration="42.365172144s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.105804484 +0000 UTC m=+958.750441264" lastFinishedPulling="2026-02-02 14:49:54.871336554 +0000 UTC m=+996.515973324" observedRunningTime="2026-02-02 14:49:56.363030451 +0000 UTC m=+998.007667251" watchObservedRunningTime="2026-02-02 14:49:56.365172144 +0000 UTC m=+998.009808914" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.369090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" event={"ID":"fc6638c4-5467-48c9-b725-284cd08372f6","Type":"ContainerStarted","Data":"2d0b907418dea9ffc40feceaf23d6e99adcbf632f08050a9b8429112104a314a"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.370016 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.382115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" event={"ID":"32aa6b38-d480-426c-a36c-4cf34c082e73","Type":"ContainerStarted","Data":"731c6e8f7adb05918c425d07d4f80cdea7fc3dc283ecaeb106b342883d620d25"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.382183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" event={"ID":"32aa6b38-d480-426c-a36c-4cf34c082e73","Type":"ContainerStarted","Data":"ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.382211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.395555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" event={"ID":"f07dc950-121d-4a91-8489-dfc187196775","Type":"ContainerStarted","Data":"b948057143600ebda6a0fc622ad560559639317a9b5839a6e62523574793252b"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.396433 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.406546 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" event={"ID":"98a357a8-0e70-4f30-a41a-8dde25612a8a","Type":"ContainerStarted","Data":"0f2dcdd6d5e247c472d850cc8c16dfc20c8fe707fd699c510c5d617b8216258b"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.407505 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.419093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" event={"ID":"53467de5-c9d7-4aa0-973d-180c8cb84b27","Type":"ContainerStarted","Data":"eda99b8e20106d4f310f7cf46603d8e510a0a9993d0f155d80a4d2b65139eda1"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.420162 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.431236 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.452334 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podStartSLOduration=5.848471494 podStartE2EDuration="41.452304021s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.486199245 +0000 UTC m=+960.130836015" lastFinishedPulling="2026-02-02 14:49:54.090031772 +0000 UTC m=+995.734668542" observedRunningTime="2026-02-02 14:49:56.432695145 +0000 UTC m=+998.077331915" watchObservedRunningTime="2026-02-02 14:49:56.452304021 +0000 UTC m=+998.096940791" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.462030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" event={"ID":"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb","Type":"ContainerStarted","Data":"451f33842f029503a039ed91632b0e5da30bafa4937ad999206a0886ef62d501"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.463433 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.487131 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" podStartSLOduration=5.150984036 podStartE2EDuration="42.487109003s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.536077367 +0000 UTC m=+959.180714137" lastFinishedPulling="2026-02-02 14:49:54.872202334 +0000 UTC m=+996.516839104" observedRunningTime="2026-02-02 14:49:56.486370724 +0000 UTC m=+998.131007484" watchObservedRunningTime="2026-02-02 14:49:56.487109003 +0000 UTC m=+998.131745773" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.492672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" event={"ID":"cf357940-5e8d-4111-86e6-1fafd5e670cd","Type":"ContainerStarted","Data":"77c4701e54c8897d490b6c0e01b2ed81d1ece388868aac728c18685da9fafeb7"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.493609 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.524335 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" event={"ID":"77902d6e-ef76-42b0-a40c-0b51f383f580","Type":"ContainerStarted","Data":"6377928fd851051af58fc7bce4f72ee2e99e7bb65a58b9265d903aae7639a192"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.525404 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.544478 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" podStartSLOduration=5.240556453 podStartE2EDuration="42.544452652s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.554852993 +0000 UTC m=+959.199489753" lastFinishedPulling="2026-02-02 14:49:54.858749182 +0000 UTC m=+996.503385952" observedRunningTime="2026-02-02 14:49:56.535694685 +0000 UTC m=+998.180331455" watchObservedRunningTime="2026-02-02 14:49:56.544452652 +0000 UTC m=+998.189089422" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.559057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" event={"ID":"ad8b0f9a-67d7-4897-af4b-f344b3d1c502","Type":"ContainerStarted","Data":"7aad9305cd0b916f4f4cde15a0ef3b46620277c76519be9112e199231273258a"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.559842 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.574825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" event={"ID":"06f5e083-c0ea-4ad0-9a07-50707d84be61","Type":"ContainerStarted","Data":"a5e28465d91360550647c580503f315794c653372ab882ad7ea02655bf4b7fec"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.575209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.599511 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" podStartSLOduration=5.189612723 podStartE2EDuration="41.599475844s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.46220814 +0000 UTC m=+960.106844910" lastFinishedPulling="2026-02-02 14:49:54.872071261 +0000 UTC m=+996.516708031" observedRunningTime="2026-02-02 14:49:56.583853377 +0000 UTC m=+998.228490147" watchObservedRunningTime="2026-02-02 14:49:56.599475844 +0000 UTC m=+998.244112614" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.624470 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" podStartSLOduration=5.778576711 podStartE2EDuration="42.624438292s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.161674899 +0000 UTC m=+958.806311669" lastFinishedPulling="2026-02-02 14:49:54.00753648 +0000 UTC m=+995.652173250" observedRunningTime="2026-02-02 14:49:56.619487009 +0000 UTC m=+998.264123789" watchObservedRunningTime="2026-02-02 14:49:56.624438292 +0000 UTC m=+998.269075062" Feb 02 
14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.659837 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" podStartSLOduration=4.086570938 podStartE2EDuration="42.659812588s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:16.869073968 +0000 UTC m=+958.513710738" lastFinishedPulling="2026-02-02 14:49:55.442315618 +0000 UTC m=+997.086952388" observedRunningTime="2026-02-02 14:49:56.654463805 +0000 UTC m=+998.299100585" watchObservedRunningTime="2026-02-02 14:49:56.659812588 +0000 UTC m=+998.304449358" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.819814 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" podStartSLOduration=6.081796124 podStartE2EDuration="42.819781317s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.093502999 +0000 UTC m=+958.738139769" lastFinishedPulling="2026-02-02 14:49:53.831488192 +0000 UTC m=+995.476124962" observedRunningTime="2026-02-02 14:49:56.788989075 +0000 UTC m=+998.433625865" watchObservedRunningTime="2026-02-02 14:49:56.819781317 +0000 UTC m=+998.464418087" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.902095 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" podStartSLOduration=41.902061215 podStartE2EDuration="41.902061215s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:49:56.896722322 +0000 UTC m=+998.541359092" watchObservedRunningTime="2026-02-02 14:49:56.902061215 +0000 UTC m=+998.546697985" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.969468 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" podStartSLOduration=8.195866678 podStartE2EDuration="42.969440482s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.459282878 +0000 UTC m=+960.103919648" lastFinishedPulling="2026-02-02 14:49:53.232856682 +0000 UTC m=+994.877493452" observedRunningTime="2026-02-02 14:49:56.952646037 +0000 UTC m=+998.597282817" watchObservedRunningTime="2026-02-02 14:49:56.969440482 +0000 UTC m=+998.614077252" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.015801 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" podStartSLOduration=6.025455602 podStartE2EDuration="43.015773009s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.88079063 +0000 UTC m=+959.525427400" lastFinishedPulling="2026-02-02 14:49:54.871108037 +0000 UTC m=+996.515744807" observedRunningTime="2026-02-02 14:49:57.008827267 +0000 UTC m=+998.653464047" watchObservedRunningTime="2026-02-02 14:49:57.015773009 +0000 UTC m=+998.660409779" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.071810 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" podStartSLOduration=5.094530723 podStartE2EDuration="42.071770236s" podCreationTimestamp="2026-02-02 14:49:15 +0000 
UTC" firstStartedPulling="2026-02-02 14:49:17.880601276 +0000 UTC m=+959.525238046" lastFinishedPulling="2026-02-02 14:49:54.857840789 +0000 UTC m=+996.502477559" observedRunningTime="2026-02-02 14:49:57.069848418 +0000 UTC m=+998.714485198" watchObservedRunningTime="2026-02-02 14:49:57.071770236 +0000 UTC m=+998.716407016" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.117732 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" podStartSLOduration=6.299221456 podStartE2EDuration="43.117706003s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.013019505 +0000 UTC m=+958.657656275" lastFinishedPulling="2026-02-02 14:49:53.831504052 +0000 UTC m=+995.476140822" observedRunningTime="2026-02-02 14:49:57.110064254 +0000 UTC m=+998.754701044" watchObservedRunningTime="2026-02-02 14:49:57.117706003 +0000 UTC m=+998.762342773" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.165198 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" podStartSLOduration=5.770881062 podStartE2EDuration="42.165168798s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.462367074 +0000 UTC m=+960.107003844" lastFinishedPulling="2026-02-02 14:49:54.85665481 +0000 UTC m=+996.501291580" observedRunningTime="2026-02-02 14:49:57.16447005 +0000 UTC m=+998.809106820" watchObservedRunningTime="2026-02-02 14:49:57.165168798 +0000 UTC m=+998.809805568" Feb 02 14:49:58 crc kubenswrapper[4869]: I0202 14:49:58.411324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:58 crc kubenswrapper[4869]: I0202 14:49:58.411936 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:59 crc kubenswrapper[4869]: I0202 14:49:59.496164 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mk6t7" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" probeResult="failure" output=< Feb 02 14:49:59 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 14:49:59 crc kubenswrapper[4869]: > Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.629996 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" event={"ID":"7e9b35b2-f20d-4102-b541-63d2822c215d","Type":"ContainerStarted","Data":"9afa6d86470cadb79b93ffcf2d0abb331307f18e0c01e30da96f6d3be9b43e96"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.631063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.633010 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" event={"ID":"c0779518-9e33-43e3-b373-263d74fbbd0f","Type":"ContainerStarted","Data":"0d8c1328ec52e73cdd86bacbcf24b06870f6941bbc722dcc462efc4260f2a7c5"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.633184 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 
14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.635369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" event={"ID":"bd94e783-b3ec-4d7e-b669-98255f029da6","Type":"ContainerStarted","Data":"2856fa3264e65b50d70e5ceb4a884aa822231c558fbda5aa40cf1b71f4891f80"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.635451 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.638544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" event={"ID":"f27a3d01-fbc5-46d9-9c11-ef6c21ead605","Type":"ContainerStarted","Data":"e2c1c344995f3d29f015c12574169aa6cfecda26a5618f318ba2bd092b4506ce"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.638776 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.660202 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" podStartSLOduration=4.477848141 podStartE2EDuration="47.660171138s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.461841801 +0000 UTC m=+960.106478571" lastFinishedPulling="2026-02-02 14:50:01.644164798 +0000 UTC m=+1003.288801568" observedRunningTime="2026-02-02 14:50:02.653389941 +0000 UTC m=+1004.298026711" watchObservedRunningTime="2026-02-02 14:50:02.660171138 +0000 UTC m=+1004.304807908" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.678733 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" podStartSLOduration=41.893551309 podStartE2EDuration="48.678708598s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:54.858611329 +0000 UTC m=+996.503248099" lastFinishedPulling="2026-02-02 14:50:01.643768618 +0000 UTC m=+1003.288405388" observedRunningTime="2026-02-02 14:50:02.678079012 +0000 UTC m=+1004.322715782" watchObservedRunningTime="2026-02-02 14:50:02.678708598 +0000 UTC m=+1004.323345368" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.718169 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" podStartSLOduration=40.933572909 podStartE2EDuration="47.718140144s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:54.858451544 +0000 UTC m=+996.503088304" lastFinishedPulling="2026-02-02 14:50:01.643018779 +0000 UTC m=+1003.287655539" observedRunningTime="2026-02-02 14:50:02.713465749 +0000 UTC m=+1004.358102529" watchObservedRunningTime="2026-02-02 14:50:02.718140144 +0000 UTC m=+1004.362776914" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.738418 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" podStartSLOduration=4.973885551 podStartE2EDuration="48.738392645s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.880015822 +0000 UTC m=+959.524652602" 
lastFinishedPulling="2026-02-02 14:50:01.644522926 +0000 UTC m=+1003.289159696" observedRunningTime="2026-02-02 14:50:02.731595357 +0000 UTC m=+1004.376232127" watchObservedRunningTime="2026-02-02 14:50:02.738392645 +0000 UTC m=+1004.383029415" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.195133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.214739 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.292209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.298427 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.394549 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.399054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.531030 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.568804 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.599341 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.915324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.993735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.028425 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.079787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.361896 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.427813 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.453790 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:50:08 crc kubenswrapper[4869]: E0202 14:50:08.464383 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.484144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.542020 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.590812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.733715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" event={"ID":"98a25bb6-75b1-49ad-8d7c-cc4e763470ec","Type":"ContainerStarted","Data":"138c732146319f66b14ff469591dab73126474a5491388391d962553666c79e2"} Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.734893 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.751030 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.780733 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" podStartSLOduration=4.322671253 podStartE2EDuration="53.780703305s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.130590942 +0000 UTC m=+959.775227712" lastFinishedPulling="2026-02-02 14:50:07.588622994 +0000 UTC m=+1009.233259764" observedRunningTime="2026-02-02 14:50:08.774611294 +0000 UTC m=+1010.419248085" watchObservedRunningTime="2026-02-02 14:50:08.780703305 +0000 UTC m=+1010.425340075" Feb 02 14:50:09 crc kubenswrapper[4869]: I0202 14:50:09.740553 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mk6t7" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" containerID="cri-o://4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" gracePeriod=2 Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.153721 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.225642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"c8bef13a-7759-4c87-be0b-09017f74f36e\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.226042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"c8bef13a-7759-4c87-be0b-09017f74f36e\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.226078 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"c8bef13a-7759-4c87-be0b-09017f74f36e\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.227159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities" (OuterVolumeSpecName: "utilities") pod "c8bef13a-7759-4c87-be0b-09017f74f36e" (UID: "c8bef13a-7759-4c87-be0b-09017f74f36e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.232468 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5" (OuterVolumeSpecName: "kube-api-access-22zp5") pod "c8bef13a-7759-4c87-be0b-09017f74f36e" (UID: "c8bef13a-7759-4c87-be0b-09017f74f36e"). InnerVolumeSpecName "kube-api-access-22zp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.282501 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8bef13a-7759-4c87-be0b-09017f74f36e" (UID: "c8bef13a-7759-4c87-be0b-09017f74f36e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.328390 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.328433 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.328446 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") on node \"crc\" DevicePath \"\"" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749848 4869 generic.go:334] "Generic (PLEG): container finished" podID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" exitCode=0 Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345"} Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a"} Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749994 4869 scope.go:117] "RemoveContainer" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.750111 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.787390 4869 scope.go:117] "RemoveContainer" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.793832 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.801600 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.811356 4869 scope.go:117] "RemoveContainer" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.841365 4869 scope.go:117] "RemoveContainer" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" Feb 02 14:50:10 crc kubenswrapper[4869]: E0202 14:50:10.848219 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345\": container with ID starting with 4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345 not found: ID does not exist" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.848602 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345"} err="failed to get container status \"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345\": rpc error: code = NotFound desc = could not find container \"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345\": container with ID starting with 4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345 not found: ID does not exist" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.848838 4869 scope.go:117] "RemoveContainer" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" Feb 02 14:50:10 crc kubenswrapper[4869]: E0202 14:50:10.849681 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b\": container with ID starting with 22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b not found: ID does not exist" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.849762 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b"} err="failed to get container status \"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b\": rpc error: code = NotFound desc = could not find container \"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b\": container with ID starting with 22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b not found: ID does not exist" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.849810 4869 scope.go:117] "RemoveContainer" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" Feb 02 14:50:10 crc kubenswrapper[4869]: E0202 14:50:10.850535 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f\": container with ID starting with 5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f not found: ID does not exist" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.850639 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f"} err="failed to get container status \"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f\": rpc error: code = NotFound desc = could not find container \"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f\": container with ID starting with 5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f not found: ID does not exist" Feb 02 14:50:11 crc kubenswrapper[4869]: I0202 14:50:11.106507 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:50:11 crc kubenswrapper[4869]: I0202 14:50:11.473864 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" path="/var/lib/kubelet/pods/c8bef13a-7759-4c87-be0b-09017f74f36e/volumes" Feb 02 14:50:11 crc kubenswrapper[4869]: I0202 14:50:11.834847 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:50:15 crc kubenswrapper[4869]: I0202 14:50:15.543949 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:50:15 crc kubenswrapper[4869]: I0202 14:50:15.849060 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:50:15 crc kubenswrapper[4869]: I0202 14:50:15.993437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:50:22 crc kubenswrapper[4869]: I0202 14:50:22.848503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" event={"ID":"6719d674-1dac-4af1-859b-ea6a2186a20a","Type":"ContainerStarted","Data":"f3b2e3dd4df40af0a6a4b4a46f04abd41944c447c6f5fedd7aad5ac45c56f1af"} Feb 02 14:50:22 crc kubenswrapper[4869]: I0202 14:50:22.868797 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podStartSLOduration=3.257716014 podStartE2EDuration="1m6.868772258s" podCreationTimestamp="2026-02-02 14:49:16 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.494977962 +0000 UTC m=+960.139614732" lastFinishedPulling="2026-02-02 14:50:22.106034206 +0000 UTC m=+1023.750670976" observedRunningTime="2026-02-02 14:50:22.868695576 +0000 UTC m=+1024.513332366" watchObservedRunningTime="2026-02-02 14:50:22.868772258 +0000 UTC m=+1024.513409028" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.199933 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202221 4869 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202253 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202374 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202420 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202431 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202442 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202450 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202462 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202472 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202486 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202495 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.210538 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.210656 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.212029 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.221642 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.223086 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tzlk5" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.223214 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.220551 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.227351 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.255952 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.262027 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.267613 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.282789 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342863 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444667 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.446725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.446832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.447643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.475020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.475068 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.547534 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.594693 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.090378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.157095 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:41 crc kubenswrapper[4869]: W0202 14:50:41.159899 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6166bb6a_5dce_4f45_8e72_80a8677451c1.slice/crio-47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64 WatchSource:0}: Error finding container 47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64: Status 404 returned error can't find the container with id 47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64 Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.991741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" event={"ID":"ffb6a700-f36f-4bad-a670-532f64d03e8d","Type":"ContainerStarted","Data":"40d283a23f15f072a351872ebd571e334c5a19ad9297f4d284e98ceadfa0347a"} Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.992896 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" event={"ID":"6166bb6a-5dce-4f45-8e72-80a8677451c1","Type":"ContainerStarted","Data":"47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64"} Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.121224 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.154938 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.157377 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.191042 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.347373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.347580 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.347756 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.451065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.451136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.451171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.452749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.453003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.489082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scf4d\" (UniqueName: 
\"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.514044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.613799 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.657775 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.660028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.685065 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.757423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.757539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.758271 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.861258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.861317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.861418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.862720 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.863140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.918693 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.010939 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.229500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:50:44 crc kubenswrapper[4869]: W0202 14:50:44.237677 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f2e276_a4a3_4992_aadc_e6e4e259feea.slice/crio-71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a WatchSource:0}: Error finding container 71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a: Status 404 returned error can't find the container with id 71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.454693 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.456620 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462221 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462367 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gjvp4" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462386 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462655 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462928 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.463022 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.463107 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581275 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjdr\" (UniqueName: 
\"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581640 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.582539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.680847 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684810 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684991 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.685014 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.685043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.685059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.688064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.688876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.689214 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.691218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.691509 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.693089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.693281 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.695607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.699946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.700759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.708978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.723371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 
crc kubenswrapper[4869]: I0202 14:50:44.856704 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.857303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.861858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.871520 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.871669 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.875444 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.875881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gtj7h" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.876418 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.876546 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.876678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.884133 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991579 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991993 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.992280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.054195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" event={"ID":"84f2e276-a4a3-4992-aadc-e6e4e259feea","Type":"ContainerStarted","Data":"71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a"} Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.056053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" event={"ID":"8b641090-1ff7-4058-9633-de20ec70c671","Type":"ContainerStarted","Data":"29623a0a20d0d3f426297d37f9c2d0abf87beb1dfbc32ce1bbed40778e70b8b2"} Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.093996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094451 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.096457 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.096612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.097610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.097946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.098400 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.098685 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.099403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.100694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.103627 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.104310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.117682 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.152747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.211982 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.546061 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: W0202 14:50:45.559285 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb339c96d_7eb1_4359_bcc3_6853622d5aa6.slice/crio-71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b WatchSource:0}: Error finding container 71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b: Status 404 returned error can't find the container with id 71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.688090 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.712702 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.712964 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.719467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4zkj9" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.720131 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.727559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.750762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.761365 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.825090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826586 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcft5\" (UniqueName: \"kubernetes.io/projected/0db20771-eb71-4272-9814-ab5bf0fff1fe-kube-api-access-fcft5\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-galera-tls-certs\") pod \"openstack-galera-0\" 
(UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.891705 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcft5\" (UniqueName: \"kubernetes.io/projected/0db20771-eb71-4272-9814-ab5bf0fff1fe-kube-api-access-fcft5\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928397 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928873 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.929525 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.931474 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.935642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.938660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.946495 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.948613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: W0202 14:50:45.965381 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95035071_a194_40ba_9b64_700ae3121dc4.slice/crio-4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965 WatchSource:0}: Error finding container 4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965: Status 404 returned error can't find the container with id 4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965 Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.002808 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:46 crc 
kubenswrapper[4869]: I0202 14:50:46.015115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcft5\" (UniqueName: \"kubernetes.io/projected/0db20771-eb71-4272-9814-ab5bf0fff1fe-kube-api-access-fcft5\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.077290 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.104506 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerStarted","Data":"71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b"} Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.120037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerStarted","Data":"4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965"} Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.796590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.931317 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.937348 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.948798 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-llsf5" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.949122 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.949349 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.949563 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.960517 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44h8g\" (UniqueName: \"kubernetes.io/projected/4287f1a9-b523-48a9-a999-fc8f34b212a4-kube-api-access-44h8g\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063303 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063564 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.065410 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.066685 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.077774 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.080637 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fz6fg" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.081047 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.089331 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/1078d20a-9d7e-45ef-8be5-bade239489c4-kube-api-access-7786r\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165784 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165805 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-config-data\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44h8g\" (UniqueName: \"kubernetes.io/projected/4287f1a9-b523-48a9-a999-fc8f34b212a4-kube-api-access-44h8g\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-kolla-config\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.167947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.168153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.168401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.168732 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"4287f1a9-b523-48a9-a999-fc8f34b212a4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.171624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.207803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44h8g\" (UniqueName: \"kubernetes.io/projected/4287f1a9-b523-48a9-a999-fc8f34b212a4-kube-api-access-44h8g\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.207830 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.222089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.226134 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271523 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/1078d20a-9d7e-45ef-8be5-bade239489c4-kube-api-access-7786r\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-config-data\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-kolla-config\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271791 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.273803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-config-data\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.274121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-kolla-config\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.277736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.291899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.296542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/1078d20a-9d7e-45ef-8be5-bade239489c4-kube-api-access-7786r\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.298697 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.410650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.800305 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.858200 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.870642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-77gm6" Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.905112 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.945395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"kube-state-metrics-0\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " pod="openstack/kube-state-metrics-0" Feb 02 14:50:49 crc kubenswrapper[4869]: I0202 14:50:49.047052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"kube-state-metrics-0\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " pod="openstack/kube-state-metrics-0" Feb 02 14:50:49 crc kubenswrapper[4869]: I0202 14:50:49.079996 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"kube-state-metrics-0\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " pod="openstack/kube-state-metrics-0" Feb 02 14:50:49 crc kubenswrapper[4869]: I0202 14:50:49.236036 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.519478 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f7z74"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.521200 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.531716 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5nxjc" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.531887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.531743 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.532932 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.592785 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bd7dt"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.599974 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.611150 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bd7dt"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629022 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-combined-ca-bundle\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629415 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629532 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-etc-ovs\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d51425d7-d30c-466d-b478-17a637e3ef9f-scripts\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-log\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79eb9544-e5e9-455c-94ca-bb36fa6eb873-scripts\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630108 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95b7\" (UniqueName: \"kubernetes.io/projected/79eb9544-e5e9-455c-94ca-bb36fa6eb873-kube-api-access-c95b7\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630233 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-run\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqbr\" (UniqueName: 
\"kubernetes.io/projected/d51425d7-d30c-466d-b478-17a637e3ef9f-kube-api-access-nsqbr\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-ovn-controller-tls-certs\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-lib\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-log-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d51425d7-d30c-466d-b478-17a637e3ef9f-scripts\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-log\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79eb9544-e5e9-455c-94ca-bb36fa6eb873-scripts\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95b7\" (UniqueName: \"kubernetes.io/projected/79eb9544-e5e9-455c-94ca-bb36fa6eb873-kube-api-access-c95b7\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-run\") pod \"ovn-controller-ovs-bd7dt\" (UID: 
\"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqbr\" (UniqueName: \"kubernetes.io/projected/d51425d7-d30c-466d-b478-17a637e3ef9f-kube-api-access-nsqbr\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-ovn-controller-tls-certs\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732968 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-lib\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733042 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-log-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-combined-ca-bundle\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-etc-ovs\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733856 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-etc-ovs\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735322 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-log-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-run\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735471 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-lib\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-log\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.737761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79eb9544-e5e9-455c-94ca-bb36fa6eb873-scripts\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.738018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d51425d7-d30c-466d-b478-17a637e3ef9f-scripts\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.743359 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-combined-ca-bundle\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.745587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-ovn-controller-tls-certs\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.756618 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95b7\" (UniqueName: \"kubernetes.io/projected/79eb9544-e5e9-455c-94ca-bb36fa6eb873-kube-api-access-c95b7\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.757515 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqbr\" (UniqueName: \"kubernetes.io/projected/d51425d7-d30c-466d-b478-17a637e3ef9f-kube-api-access-nsqbr\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.846604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.926713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.239049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerStarted","Data":"3bd6013ab427605f751d6d5e88cdfa9e6c7d0a76361b78cacc0f93508f5f1596"} Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.363113 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.364558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.370805 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.371317 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.371621 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.371780 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kj4w2" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.373485 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.392571 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.446392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.446588 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.446872 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.447020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-config\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.447117 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9r8\" (UniqueName: \"kubernetes.io/projected/208fe19b-f03b-4a68-b6f2-f9dc3783239e-kube-api-access-8z9r8\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.447160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.449366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.449519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551349 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-config\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551382 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z9r8\" (UniqueName: \"kubernetes.io/projected/208fe19b-f03b-4a68-b6f2-f9dc3783239e-kube-api-access-8z9r8\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551901 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.552003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.553140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-config\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.553367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.557779 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.557836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.570243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.591839 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.603062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z9r8\" (UniqueName: \"kubernetes.io/projected/208fe19b-f03b-4a68-b6f2-f9dc3783239e-kube-api-access-8z9r8\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.711028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.122848 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.125744 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.131891 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hz4lj" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.131939 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.132053 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.131975 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.141631 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.202968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203043 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203071 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v74v\" (UniqueName: \"kubernetes.io/projected/c9a1c388-0473-4284-9a2c-09e3d97858f2-kube-api-access-9v74v\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203127 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203255 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305510 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v74v\" (UniqueName: \"kubernetes.io/projected/c9a1c388-0473-4284-9a2c-09e3d97858f2-kube-api-access-9v74v\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.306302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.306677 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.307113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.307446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.312810 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.315856 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.325946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v74v\" (UniqueName: \"kubernetes.io/projected/c9a1c388-0473-4284-9a2c-09e3d97858f2-kube-api-access-9v74v\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.326752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.341585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.460713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.705840 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.707575 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfjdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b339c96d-7eb1-4359-bcc3-6853622d5aa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.709167 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" Feb 02 14:51:04 crc kubenswrapper[4869]: E0202 14:51:04.341025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" Feb 02 14:51:11 crc kubenswrapper[4869]: I0202 14:51:11.534774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.075546 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.075798 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-xjhxx_openstack(8b641090-1ff7-4058-9633-de20ec70c671): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.077014 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" podUID="8b641090-1ff7-4058-9633-de20ec70c671" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.124339 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.124516 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h22v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-q69j4_openstack(ffb6a700-f36f-4bad-a670-532f64d03e8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.125662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" podUID="ffb6a700-f36f-4bad-a670-532f64d03e8d" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.156351 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.156594 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbmfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k2kfn_openstack(6166bb6a-5dce-4f45-8e72-80a8677451c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.160335 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" podUID="6166bb6a-5dce-4f45-8e72-80a8677451c1" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.206017 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.206882 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scf4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hlvlp_openstack(84f2e276-a4a3-4992-aadc-e6e4e259feea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.208409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" podUID="84f2e276-a4a3-4992-aadc-e6e4e259feea" Feb 02 14:51:12 crc kubenswrapper[4869]: I0202 14:51:12.433738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1078d20a-9d7e-45ef-8be5-bade239489c4","Type":"ContainerStarted","Data":"0742d987bd520eb5b5410dfa68de7b74a894c31587c7a99077474008abe77c17"} Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.437040 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" podUID="84f2e276-a4a3-4992-aadc-e6e4e259feea" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.437297 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" podUID="8b641090-1ff7-4058-9633-de20ec70c671" Feb 02 14:51:12 crc kubenswrapper[4869]: I0202 14:51:12.638793 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.079097 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.115718 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 14:51:13 crc kubenswrapper[4869]: W0202 14:51:13.123109 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d7887e_0487_4179_a0af_6f51b9eed8e7.slice/crio-be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103 WatchSource:0}: Error finding container be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103: Status 404 returned error can't find the container with id be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103 Feb 02 14:51:13 crc kubenswrapper[4869]: W0202 14:51:13.130573 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4287f1a9_b523_48a9_a999_fc8f34b212a4.slice/crio-1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f WatchSource:0}: Error finding container 1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f: Status 404 returned error can't find the container with id 1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.221574 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.246321 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 14:51:13 crc kubenswrapper[4869]: W0202 14:51:13.278568 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a1c388_0473_4284_9a2c_09e3d97858f2.slice/crio-a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e WatchSource:0}: Error finding container a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e: Status 404 returned error can't find the container with id a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.376743 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.409291 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"ffb6a700-f36f-4bad-a670-532f64d03e8d\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.409382 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"ffb6a700-f36f-4bad-a670-532f64d03e8d\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.409927 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config" (OuterVolumeSpecName: "config") pod "ffb6a700-f36f-4bad-a670-532f64d03e8d" (UID: "ffb6a700-f36f-4bad-a670-532f64d03e8d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.452161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74" event={"ID":"d51425d7-d30c-466d-b478-17a637e3ef9f","Type":"ContainerStarted","Data":"b8aa905f4aa320d22c75d46051742b044332d353c2bb5cac09622ca7bb44d496"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.455616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9a1c388-0473-4284-9a2c-09e3d97858f2","Type":"ContainerStarted","Data":"a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.458379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerStarted","Data":"1f043f93bdd75692e3778bb3515619f7b78ac6456cb11303903caa9aa52d1f13"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.460365 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerStarted","Data":"be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.461596 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.463867 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.475785 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v" (OuterVolumeSpecName: "kube-api-access-7h22v") pod "ffb6a700-f36f-4bad-a670-532f64d03e8d" (UID: "ffb6a700-f36f-4bad-a670-532f64d03e8d"). InnerVolumeSpecName "kube-api-access-7h22v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.513487 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"6166bb6a-5dce-4f45-8e72-80a8677451c1\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.513549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"6166bb6a-5dce-4f45-8e72-80a8677451c1\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.513673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"6166bb6a-5dce-4f45-8e72-80a8677451c1\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514313 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6166bb6a-5dce-4f45-8e72-80a8677451c1" (UID: "6166bb6a-5dce-4f45-8e72-80a8677451c1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514539 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514564 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config" (OuterVolumeSpecName: "config") pod "6166bb6a-5dce-4f45-8e72-80a8677451c1" (UID: "6166bb6a-5dce-4f45-8e72-80a8677451c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.550112 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" event={"ID":"ffb6a700-f36f-4bad-a670-532f64d03e8d","Type":"ContainerDied","Data":"40d283a23f15f072a351872ebd571e334c5a19ad9297f4d284e98ceadfa0347a"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.550182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" event={"ID":"6166bb6a-5dce-4f45-8e72-80a8677451c1","Type":"ContainerDied","Data":"47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.550201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerStarted","Data":"1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.576628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx" (OuterVolumeSpecName: "kube-api-access-cbmfx") pod "6166bb6a-5dce-4f45-8e72-80a8677451c1" (UID: "6166bb6a-5dce-4f45-8e72-80a8677451c1"). InnerVolumeSpecName "kube-api-access-cbmfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.618829 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.618878 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.618950 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.863921 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.872188 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.887336 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.900362 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:51:13 crc kubenswrapper[4869]: E0202 14:51:13.972305 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6166bb6a_5dce_4f45_8e72_80a8677451c1.slice\": RecentStats: unable to find data in memory cache]" Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.129613 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.267875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bd7dt"] Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.496868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerStarted","Data":"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"} Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.507016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerStarted","Data":"afb1cbeab983d6b4b46ae44495de0b332c18b10393223bd85665c1538577edab"} Feb 02 14:51:15 crc kubenswrapper[4869]: W0202 14:51:15.134967 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod208fe19b_f03b_4a68_b6f2_f9dc3783239e.slice/crio-85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab WatchSource:0}: Error finding container 85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab: Status 404 returned error can't find the container with id 85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.473332 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6166bb6a-5dce-4f45-8e72-80a8677451c1" 
path="/var/lib/kubelet/pods/6166bb6a-5dce-4f45-8e72-80a8677451c1/volumes" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.474123 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb6a700-f36f-4bad-a670-532f64d03e8d" path="/var/lib/kubelet/pods/ffb6a700-f36f-4bad-a670-532f64d03e8d/volumes" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.513982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"208fe19b-f03b-4a68-b6f2-f9dc3783239e","Type":"ContainerStarted","Data":"85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab"} Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.515784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerStarted","Data":"1c23fbdda4e59536fefeaef67eb5d8febb2087bd572cafb12a5a3ea2fe0c0860"} Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.815723 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sr5dv"] Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.817371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.821268 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.854208 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sr5dv"] Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovs-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-combined-ca-bundle\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovn-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lgs\" (UniqueName: \"kubernetes.io/projected/2b612893-5e70-472a-a65f-0d0c66f82de3-kube-api-access-n4lgs\") pod 
\"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b612893-5e70-472a-a65f-0d0c66f82de3-config\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.998171 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovs-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006889 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-combined-ca-bundle\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovn-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lgs\" (UniqueName: \"kubernetes.io/projected/2b612893-5e70-472a-a65f-0d0c66f82de3-kube-api-access-n4lgs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.007032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b612893-5e70-472a-a65f-0d0c66f82de3-config\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.008019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b612893-5e70-472a-a65f-0d0c66f82de3-config\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.010440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovn-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.010533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovs-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.018328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-combined-ca-bundle\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.035192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.040378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.042376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.052236 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lgs\" (UniqueName: \"kubernetes.io/projected/2b612893-5e70-472a-a65f-0d0c66f82de3-kube-api-access-n4lgs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.052274 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120045 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120705 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.139697 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.147332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.229694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.230239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.236192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.282073 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffdv\" (UniqueName: 
\"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.323338 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.357235 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.359582 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.362539 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.388146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.422800 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442551 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.526546 4869 generic.go:334] "Generic (PLEG): container finished" podID="0db20771-eb71-4272-9814-ab5bf0fff1fe" containerID="1f043f93bdd75692e3778bb3515619f7b78ac6456cb11303903caa9aa52d1f13" exitCode=0 Feb 02 14:51:16 crc 
kubenswrapper[4869]: I0202 14:51:16.526594 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerDied","Data":"1f043f93bdd75692e3778bb3515619f7b78ac6456cb11303903caa9aa52d1f13"} Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545475 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.547740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.547754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.548496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.550424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.580082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.705967 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.402782 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.472776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"8b641090-1ff7-4058-9633-de20ec70c671\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.472861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"8b641090-1ff7-4058-9633-de20ec70c671\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.473150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"8b641090-1ff7-4058-9633-de20ec70c671\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.473520 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b641090-1ff7-4058-9633-de20ec70c671" (UID: "8b641090-1ff7-4058-9633-de20ec70c671"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.475284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config" (OuterVolumeSpecName: "config") pod "8b641090-1ff7-4058-9633-de20ec70c671" (UID: "8b641090-1ff7-4058-9633-de20ec70c671"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.476397 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.476423 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.476696 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c" (OuterVolumeSpecName: "kube-api-access-2fd4c") pod "8b641090-1ff7-4058-9633-de20ec70c671" (UID: "8b641090-1ff7-4058-9633-de20ec70c671"). InnerVolumeSpecName "kube-api-access-2fd4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.542621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" event={"ID":"8b641090-1ff7-4058-9633-de20ec70c671","Type":"ContainerDied","Data":"29623a0a20d0d3f426297d37f9c2d0abf87beb1dfbc32ce1bbed40778e70b8b2"} Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.542729 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.585109 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.594966 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.605926 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.837849 4869 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"84f2e276-a4a3-4992-aadc-e6e4e259feea\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") "
Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"84f2e276-a4a3-4992-aadc-e6e4e259feea\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") "
Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"84f2e276-a4a3-4992-aadc-e6e4e259feea\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") "
Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config" (OuterVolumeSpecName: "config") pod "84f2e276-a4a3-4992-aadc-e6e4e259feea" (UID: "84f2e276-a4a3-4992-aadc-e6e4e259feea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993895 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84f2e276-a4a3-4992-aadc-e6e4e259feea" (UID: "84f2e276-a4a3-4992-aadc-e6e4e259feea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.997767 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d" (OuterVolumeSpecName: "kube-api-access-scf4d") pod "84f2e276-a4a3-4992-aadc-e6e4e259feea" (UID: "84f2e276-a4a3-4992-aadc-e6e4e259feea"). InnerVolumeSpecName "kube-api-access-scf4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.095516 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.095566 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.095579 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.550717 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp"
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.550711 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" event={"ID":"84f2e276-a4a3-4992-aadc-e6e4e259feea","Type":"ContainerDied","Data":"71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a"}
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.553478 4869 generic.go:334] "Generic (PLEG): container finished" podID="4287f1a9-b523-48a9-a999-fc8f34b212a4" containerID="afb1cbeab983d6b4b46ae44495de0b332c18b10393223bd85665c1538577edab" exitCode=0
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.553514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerDied","Data":"afb1cbeab983d6b4b46ae44495de0b332c18b10393223bd85665c1538577edab"}
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.626434 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"]
Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.627270 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"]
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.215272 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sr5dv"]
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.279635 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"]
Feb 02 14:51:19 crc kubenswrapper[4869]: W0202 14:51:19.323048 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54b21918_ca4b_429c_8a6e_dd4bb0240efd.slice/crio-ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8 WatchSource:0}: Error finding container ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8: Status 404 returned error can't find the container with id ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8
Feb 02 14:51:19 crc kubenswrapper[4869]: W0202 14:51:19.326497 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b612893_5e70_472a_a65f_0d0c66f82de3.slice/crio-409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79 WatchSource:0}: Error finding container 409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79: Status 404 returned error can't find the container with id 409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.405431 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"]
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.479072 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f2e276-a4a3-4992-aadc-e6e4e259feea" path="/var/lib/kubelet/pods/84f2e276-a4a3-4992-aadc-e6e4e259feea/volumes"
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.483585 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b641090-1ff7-4058-9633-de20ec70c671" path="/var/lib/kubelet/pods/8b641090-1ff7-4058-9633-de20ec70c671/volumes"
Feb 02 14:51:19 crc kubenswrapper[4869]: W0202 14:51:19.558059 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf07564_1cdf_4897_be34_68c8d9ec7534.slice/crio-1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff WatchSource:0}: Error finding container 1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff: Status 404 returned error can't find the container with id 1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.568476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerStarted","Data":"ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8"}
Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.570326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sr5dv" event={"ID":"2b612893-5e70-472a-a65f-0d0c66f82de3","Type":"ContainerStarted","Data":"409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.580131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerStarted","Data":"1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.588129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1078d20a-9d7e-45ef-8be5-bade239489c4","Type":"ContainerStarted","Data":"8624dc0f6e5aef1937a45574b4039005c89f64cb76b90fd3084680864b7a8ca5"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.588788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.595706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerStarted","Data":"c4c71a1806a7cf6c12be9dc691b40d12aac113502b11ac27efe26b925b9ca279"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.606312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74" event={"ID":"d51425d7-d30c-466d-b478-17a637e3ef9f","Type":"ContainerStarted","Data":"31b2aa396592de0711b171e3fde6e94effe4a619e90cae985d7379ddab85267b"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.626403 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-f7z74"
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.634184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"208fe19b-f03b-4a68-b6f2-f9dc3783239e","Type":"ContainerStarted","Data":"7172f4ff4f290db088a1c5719f2d94b3e2c65c93bba4fc500c4ca093e634bac4"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.646160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerStarted","Data":"e252241fcc57d3472614846ec2db93657f20d57c65957a0c1b70f834aff8f9aa"}
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.654149 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.508390013 podStartE2EDuration="33.654119842s" podCreationTimestamp="2026-02-02 14:50:47 +0000 UTC" firstStartedPulling="2026-02-02 14:51:12.140228097 +0000 UTC m=+1073.784864877" lastFinishedPulling="2026-02-02 14:51:18.285957936 +0000 UTC m=+1079.930594706" observedRunningTime="2026-02-02 14:51:20.61852072 +0000 UTC m=+1082.263157510" watchObservedRunningTime="2026-02-02 14:51:20.654119842 +0000 UTC m=+1082.298756612"
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.677296 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.677266474 podStartE2EDuration="35.677266474s" podCreationTimestamp="2026-02-02 14:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:20.664792356 +0000 UTC m=+1082.309429136" watchObservedRunningTime="2026-02-02 14:51:20.677266474 +0000 UTC m=+1082.321903244"
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.694339 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-f7z74" podStartSLOduration=22.61538358 podStartE2EDuration="28.694306907s" podCreationTimestamp="2026-02-02 14:50:52 +0000 UTC" firstStartedPulling="2026-02-02 14:51:12.655241316 +0000 UTC m=+1074.299878086" lastFinishedPulling="2026-02-02 14:51:18.734164643 +0000 UTC m=+1080.378801413" observedRunningTime="2026-02-02 14:51:20.689646521 +0000 UTC m=+1082.334283301" watchObservedRunningTime="2026-02-02 14:51:20.694306907 +0000 UTC m=+1082.338943697"
Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.718551 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.950922193 podStartE2EDuration="36.718528896s" podCreationTimestamp="2026-02-02 14:50:44 +0000 UTC" firstStartedPulling="2026-02-02 14:50:52.443269473 +0000 UTC m=+1054.087906243" lastFinishedPulling="2026-02-02 14:51:12.210876176 +0000 UTC m=+1073.855512946" observedRunningTime="2026-02-02 14:51:20.71543087 +0000 UTC m=+1082.360067630" watchObservedRunningTime="2026-02-02 14:51:20.718528896 +0000 UTC m=+1082.363165666"
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.657099 4869 generic.go:334] "Generic (PLEG): container finished" podID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerID="7819a6f12b4ee4b2e0e6548b9439122ce17a185d8262e570c2db8127e890e849" exitCode=0
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.657209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerDied","Data":"7819a6f12b4ee4b2e0e6548b9439122ce17a185d8262e570c2db8127e890e849"}
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.661069 4869 generic.go:334] "Generic (PLEG): container finished" podID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerID="d4bc95d2879e70b645a2e7e235f1fbdcdf5fe19a1ef7176a88d572c086b1c57b" exitCode=0
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.661154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerDied","Data":"d4bc95d2879e70b645a2e7e235f1fbdcdf5fe19a1ef7176a88d572c086b1c57b"}
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.663984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerStarted","Data":"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7"}
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.666295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9a1c388-0473-4284-9a2c-09e3d97858f2","Type":"ContainerStarted","Data":"1537b682b197cf64754fc557947db1b13d8d218e2346b3868478942db4c7b9eb"}
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.668827 4869 generic.go:334] "Generic (PLEG): container finished" podID="79eb9544-e5e9-455c-94ca-bb36fa6eb873" containerID="a581fb6071039795143b024e23ba0276e0285d6df07b1b2559bd3e81a25e5819" exitCode=0
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.668892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerDied","Data":"a581fb6071039795143b024e23ba0276e0285d6df07b1b2559bd3e81a25e5819"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.689065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerStarted","Data":"63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.689976 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.691120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"208fe19b-f03b-4a68-b6f2-f9dc3783239e","Type":"ContainerStarted","Data":"2b6e8cf0074a3e1b10b9838ac29e513619e8774be1c3be6cc3a2358e37722d5b"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.693062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerStarted","Data":"b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.693163 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-4c4vl"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.695532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9a1c388-0473-4284-9a2c-09e3d97858f2","Type":"ContainerStarted","Data":"2aa03bb95ca126ad4f0aa8e30199b4e48a973bd950b896167ae7da8fb2b11935"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.697481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerStarted","Data":"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.697784 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.699354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sr5dv" event={"ID":"2b612893-5e70-472a-a65f-0d0c66f82de3","Type":"ContainerStarted","Data":"bf94e3195500303a722179095cae6bf7f79a08cad1f791832b07ed7d953faa63"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.701771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerStarted","Data":"40be43190fd4cc09839c1b1e0bfd2813fa6b14c34c62ec45073d527453d84427"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.701803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerStarted","Data":"25a4ea77a1c455d146e841e2467b8fad7f941ee565a4984a83bee500a38e7c08"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.702037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bd7dt"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.702095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bd7dt"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.711337 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.711392 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.722145 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" podStartSLOduration=7.119713408 podStartE2EDuration="7.722120881s" podCreationTimestamp="2026-02-02 14:51:16 +0000 UTC" firstStartedPulling="2026-02-02 14:51:19.575294985 +0000 UTC m=+1081.219931745" lastFinishedPulling="2026-02-02 14:51:20.177702448 +0000 UTC m=+1081.822339218" observedRunningTime="2026-02-02 14:51:23.717321943 +0000 UTC m=+1085.361958723" watchObservedRunningTime="2026-02-02 14:51:23.722120881 +0000 UTC m=+1085.366757651"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.741956 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.583913161 podStartE2EDuration="28.741935192s" podCreationTimestamp="2026-02-02 14:50:55 +0000 UTC" firstStartedPulling="2026-02-02 14:51:13.287481937 +0000 UTC m=+1074.932118707" lastFinishedPulling="2026-02-02 14:51:22.445503968 +0000 UTC m=+1084.090140738" observedRunningTime="2026-02-02 14:51:23.736146518 +0000 UTC m=+1085.380783298" watchObservedRunningTime="2026-02-02 14:51:23.741935192 +0000 UTC m=+1085.386571972"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.764633 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.769473 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bd7dt" podStartSLOduration=27.472903761 podStartE2EDuration="31.769448713s" podCreationTimestamp="2026-02-02 14:50:52 +0000 UTC" firstStartedPulling="2026-02-02 14:51:15.142859639 +0000 UTC m=+1076.787496409" lastFinishedPulling="2026-02-02 14:51:19.439404601 +0000 UTC m=+1081.084041361" observedRunningTime="2026-02-02 14:51:23.76166355 +0000 UTC m=+1085.406300320" watchObservedRunningTime="2026-02-02 14:51:23.769448713 +0000 UTC m=+1085.414085493"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.785073 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=26.507294874 podStartE2EDuration="35.785052879s" podCreationTimestamp="2026-02-02 14:50:48 +0000 UTC" firstStartedPulling="2026-02-02 14:51:13.128952383 +0000 UTC m=+1074.773589153" lastFinishedPulling="2026-02-02 14:51:22.406710368 +0000 UTC m=+1084.051347158" observedRunningTime="2026-02-02 14:51:23.78020551 +0000 UTC m=+1085.424842290" watchObservedRunningTime="2026-02-02 14:51:23.785052879 +0000 UTC m=+1085.429689659"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.826294 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.552339709 podStartE2EDuration="31.826275339s" podCreationTimestamp="2026-02-02 14:50:52 +0000 UTC" firstStartedPulling="2026-02-02 14:51:15.142610492 +0000 UTC m=+1076.787247262" lastFinishedPulling="2026-02-02 14:51:22.416546102 +0000 UTC m=+1084.061182892" observedRunningTime="2026-02-02 14:51:23.811963545 +0000 UTC m=+1085.456600325" watchObservedRunningTime="2026-02-02 14:51:23.826275339 +0000 UTC m=+1085.470912109"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.834717 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sr5dv" podStartSLOduration=5.77847641 podStartE2EDuration="8.834700048s" podCreationTimestamp="2026-02-02 14:51:15 +0000 UTC" firstStartedPulling="2026-02-02 14:51:19.350434839 +0000 UTC m=+1080.995071609" lastFinishedPulling="2026-02-02 14:51:22.406658477 +0000 UTC m=+1084.051295247" observedRunningTime="2026-02-02 14:51:23.829746486 +0000 UTC m=+1085.474383256" watchObservedRunningTime="2026-02-02 14:51:23.834700048 +0000 UTC m=+1085.479336818"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.867710 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-4c4vl" podStartSLOduration=7.017610681 podStartE2EDuration="7.867683475s" podCreationTimestamp="2026-02-02 14:51:16 +0000 UTC" firstStartedPulling="2026-02-02 14:51:19.325791348 +0000 UTC m=+1080.970428118" lastFinishedPulling="2026-02-02 14:51:20.175864152 +0000 UTC m=+1081.820500912" observedRunningTime="2026-02-02 14:51:23.850804947 +0000 UTC m=+1085.495441737" watchObservedRunningTime="2026-02-02 14:51:23.867683475 +0000 UTC m=+1085.512320245"
Feb 02 14:51:25 crc kubenswrapper[4869]: I0202 14:51:25.775388 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.079862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.079948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.219705 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.461819 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.461996 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.502634 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.786998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.914706 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.971449 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.973539 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.978449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.982540 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.984290 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-shdlb"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.997435 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.008015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080565 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ssl\" (UniqueName: \"kubernetes.io/projected/f502e55d-56a7-4238-b2cc-46a4c2eb3945-kube-api-access-82ssl\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080772 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.081019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-config\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.081086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
\"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ssl\" (UniqueName: \"kubernetes.io/projected/f502e55d-56a7-4238-b2cc-46a4c2eb3945-kube-api-access-82ssl\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183987 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-config\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184122 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184162 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.185331 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.185524 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-config\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.193646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.203231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.207634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.210854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ssl\" (UniqueName: \"kubernetes.io/projected/f502e55d-56a7-4238-b2cc-46a4c2eb3945-kube-api-access-82ssl\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.299515 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.299581 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.301073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.414203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.424358 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hqz6l"] Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.425630 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.441740 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"] Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.443611 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.448554 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.469079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqz6l"] Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.493259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"] Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.503630 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.602476 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.603175 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.603235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod 
\"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.603293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.605961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.622251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.646504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.650893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.738793 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6nfjx"] Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.740323 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.752036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6nfjx"] Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.763735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.777683 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.825494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.825572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.839251 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.844604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.848157 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.858821 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.927371 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.927438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.929364 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.938900 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.955242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.029521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.029572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.045197 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.068343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.132022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.132074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.133285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.163657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.170423 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.422394 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqz6l"]
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.697733 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"]
Feb 02 14:51:28 crc kubenswrapper[4869]: W0202 14:51:28.700358 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57ed4541_0cbb_4412_b054_fe72923fc2ba.slice/crio-768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65 WatchSource:0}: Error finding container 768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65: Status 404 returned error can't find the container with id 768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.761643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f502e55d-56a7-4238-b2cc-46a4c2eb3945","Type":"ContainerStarted","Data":"10831d4bcc622b0b7eb940eb7a1486f3ca8b2ca5db0102460ed44c44902a850d"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.770011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerStarted","Data":"df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.770056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerStarted","Data":"fe14be75a1800d62e9b67cddf1c8c2e5476e5e2b193631d4ce38d708f24a91ca"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.779865 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-de8f-account-create-update-7gxr8" event={"ID":"57ed4541-0cbb-4412-b054-fe72923fc2ba","Type":"ContainerStarted","Data":"768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.788434 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hqz6l" podStartSLOduration=1.788413399 podStartE2EDuration="1.788413399s" podCreationTimestamp="2026-02-02 14:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:28.786942613 +0000 UTC m=+1090.431579383" watchObservedRunningTime="2026-02-02 14:51:28.788413399 +0000 UTC m=+1090.433050169"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.916790 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6nfjx"]
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.040590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"]
Feb 02 14:51:29 crc kubenswrapper[4869]: W0202 14:51:29.048304 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod667b6a5a_a090_407f_a4c1_229be7db4fbc.slice/crio-50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def WatchSource:0}: Error finding container 50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def: Status 404 returned error can't find the container with id 50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.245099 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.793108 4869 generic.go:334] "Generic (PLEG): container finished" podID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerID="df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f" exitCode=0
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.793210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerDied","Data":"df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.794613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerStarted","Data":"50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.797314 4869 generic.go:334] "Generic (PLEG): container finished" podID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerID="d6f5aeb4cb8e140e0ec76f751f66f1f3334b226154def23e06d3735565e7a00e" exitCode=0
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.797481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6nfjx" event={"ID":"fc85b87e-a9f7-4407-8f88-59b46f424fe5","Type":"ContainerDied","Data":"d6f5aeb4cb8e140e0ec76f751f66f1f3334b226154def23e06d3735565e7a00e"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.797516 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6nfjx" event={"ID":"fc85b87e-a9f7-4407-8f88-59b46f424fe5","Type":"ContainerStarted","Data":"88ab34f5cb79551510be237f75a59a62a97ace89c907b1652139d4ddbf0f2615"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.799850 4869 generic.go:334] "Generic (PLEG): container finished" podID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerID="78a897732627685686d46c9cdceda0daa9d9401b96294c575ac6408193fb1e9d" exitCode=0
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.799931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-de8f-account-create-update-7gxr8" event={"ID":"57ed4541-0cbb-4412-b054-fe72923fc2ba","Type":"ContainerDied","Data":"78a897732627685686d46c9cdceda0daa9d9401b96294c575ac6408193fb1e9d"}
Feb 02 14:51:30 crc kubenswrapper[4869]: I0202 14:51:30.811976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f502e55d-56a7-4238-b2cc-46a4c2eb3945","Type":"ContainerStarted","Data":"c2898c29c7ac00e9470327dfac98457f4ec58d0bc1ca81d493d5f1b2e5424cb4"}
Feb 02 14:51:30 crc kubenswrapper[4869]: I0202 14:51:30.814503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerStarted","Data":"6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176"}
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.226978 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.322950 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.323127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.324629 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc85b87e-a9f7-4407-8f88-59b46f424fe5" (UID: "fc85b87e-a9f7-4407-8f88-59b46f424fe5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.335346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz" (OuterVolumeSpecName: "kube-api-access-88gjz") pod "fc85b87e-a9f7-4407-8f88-59b46f424fe5" (UID: "fc85b87e-a9f7-4407-8f88-59b46f424fe5"). InnerVolumeSpecName "kube-api-access-88gjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.396979 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.404598 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.426173 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.426233 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.427189 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.527648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.527882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"57ed4541-0cbb-4412-b054-fe72923fc2ba\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.528018 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.528141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"57ed4541-0cbb-4412-b054-fe72923fc2ba\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.528811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cae9d7b-b1d0-4745-801d-14b5f1e5f959" (UID: "2cae9d7b-b1d0-4745-801d-14b5f1e5f959"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.530153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57ed4541-0cbb-4412-b054-fe72923fc2ba" (UID: "57ed4541-0cbb-4412-b054-fe72923fc2ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.544351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6" (OuterVolumeSpecName: "kube-api-access-7n7j6") pod "2cae9d7b-b1d0-4745-801d-14b5f1e5f959" (UID: "2cae9d7b-b1d0-4745-801d-14b5f1e5f959").
InnerVolumeSpecName "kube-api-access-7n7j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.549751 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v" (OuterVolumeSpecName: "kube-api-access-4rv6v") pod "57ed4541-0cbb-4412-b054-fe72923fc2ba" (UID: "57ed4541-0cbb-4412-b054-fe72923fc2ba"). InnerVolumeSpecName "kube-api-access-4rv6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631187 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631246 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631265 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631278 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.709134 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.799422 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.825927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerDied","Data":"fe14be75a1800d62e9b67cddf1c8c2e5476e5e2b193631d4ce38d708f24a91ca"} Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.825981 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe14be75a1800d62e9b67cddf1c8c2e5476e5e2b193631d4ce38d708f24a91ca" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.826095 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.829212 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6nfjx" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.829619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6nfjx" event={"ID":"fc85b87e-a9f7-4407-8f88-59b46f424fe5","Type":"ContainerDied","Data":"88ab34f5cb79551510be237f75a59a62a97ace89c907b1652139d4ddbf0f2615"} Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.829658 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ab34f5cb79551510be237f75a59a62a97ace89c907b1652139d4ddbf0f2615" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.835851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-de8f-account-create-update-7gxr8" event={"ID":"57ed4541-0cbb-4412-b054-fe72923fc2ba","Type":"ContainerDied","Data":"768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65"} Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.835939 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65" Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.836467 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns" containerID="cri-o://63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d" gracePeriod=10 Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.836705 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8" Feb 02 14:51:32 crc kubenswrapper[4869]: I0202 14:51:32.846559 4869 generic.go:334] "Generic (PLEG): container finished" podID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerID="63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d" exitCode=0 Feb 02 14:51:32 crc kubenswrapper[4869]: I0202 14:51:32.846660 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerDied","Data":"63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d"} Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.594979 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:34 crc kubenswrapper[4869]: E0202 14:51:34.595862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerName="mariadb-account-create-update" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.595879 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerName="mariadb-account-create-update" Feb 02 14:51:34 crc kubenswrapper[4869]: E0202 14:51:34.595896 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerName="mariadb-database-create" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.597773 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerName="mariadb-database-create" Feb 02 14:51:34 crc kubenswrapper[4869]: E0202 14:51:34.597891 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerName="mariadb-database-create" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.597904 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerName="mariadb-database-create" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.598267 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerName="mariadb-database-create" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.598288 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerName="mariadb-database-create" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.598313 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerName="mariadb-account-create-update" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.599072 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.602882 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.620067 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.693007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.693270 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.796345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.795216 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.796522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.824610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfdv2\" (UniqueName: 
\"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.925283 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.386106 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.870820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerStarted","Data":"c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e"} Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.871320 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerStarted","Data":"2aa604e3dfd2060c4fc58fbd9ba211d90108d9d1fb97d4ced519b6388e7d6bc1"} Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.872422 4869 generic.go:334] "Generic (PLEG): container finished" podID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerID="6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176" exitCode=0 Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.872447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerDied","Data":"6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176"} Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.398127 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571480 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.578421 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv" (OuterVolumeSpecName: "kube-api-access-pffdv") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "kube-api-access-pffdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.616786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config" (OuterVolumeSpecName: "config") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.622580 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.624560 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.675962 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.676248 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.676342 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.676429 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.883633 4869 generic.go:334] "Generic (PLEG): container finished" podID="6b49613f-eb42-441c-a98e-651ac383358e" containerID="c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e" exitCode=0 Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.883726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerDied","Data":"c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e"} Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.887098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerDied","Data":"1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff"} Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.887137 4869 scope.go:117] "RemoveContainer" containerID="63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.887141 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.891317 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f502e55d-56a7-4238-b2cc-46a4c2eb3945","Type":"ContainerStarted","Data":"c87574b3c52a146aab94e0f857bb893569a9afb1c8ab1319d43693e7c4a95500"} Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.891360 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.917061 4869 scope.go:117] "RemoveContainer" containerID="7819a6f12b4ee4b2e0e6548b9439122ce17a185d8262e570c2db8127e890e849" Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.963868 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.460689572 podStartE2EDuration="10.963837443s" podCreationTimestamp="2026-02-02 14:51:26 +0000 UTC" firstStartedPulling="2026-02-02 14:51:28.061394682 +0000 UTC m=+1089.706031462" lastFinishedPulling="2026-02-02 14:51:29.564542563 +0000 UTC m=+1091.209179333" observedRunningTime="2026-02-02 14:51:36.931100113 +0000 UTC m=+1098.575736893" watchObservedRunningTime="2026-02-02 14:51:36.963837443 +0000 UTC m=+1098.608474213" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.003182 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.011360 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.015758 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 14:51:37 crc kubenswrapper[4869]: E0202 14:51:37.016482 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="init" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.016507 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="init" Feb 02 14:51:37 crc kubenswrapper[4869]: E0202 14:51:37.016535 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.016548 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.016745 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.017750 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.022405 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.079850 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.081382 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.083845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.083989 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.084171 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.093525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.185793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.186300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.186352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.186480 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.187159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.213691 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod 
\"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.287539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.287711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.288719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.307796 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.332584 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.332695 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.401652 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.475415 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" path="/var/lib/kubelet/pods/2cf07564-1cdf-4897-be34-68c8d9ec7534/volumes" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.492481 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"667b6a5a-a090-407f-a4c1-229be7db4fbc\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.492815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"667b6a5a-a090-407f-a4c1-229be7db4fbc\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.493564 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "667b6a5a-a090-407f-a4c1-229be7db4fbc" (UID: "667b6a5a-a090-407f-a4c1-229be7db4fbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.502946 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp" (OuterVolumeSpecName: "kube-api-access-gfplp") pod "667b6a5a-a090-407f-a4c1-229be7db4fbc" (UID: "667b6a5a-a090-407f-a4c1-229be7db4fbc"). InnerVolumeSpecName "kube-api-access-gfplp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.594605 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.595067 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.797381 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 14:51:37 crc kubenswrapper[4869]: W0202 14:51:37.800779 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod663a2e70_1d18_41b3_bc31_7e8b44f00450.slice/crio-8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105 WatchSource:0}: Error finding container 8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105: Status 404 returned error can't find the container with id 8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105 Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.911805 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.911820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerDied","Data":"50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def"} Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.911859 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.916401 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.920054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqbqn" event={"ID":"663a2e70-1d18-41b3-bc31-7e8b44f00450","Type":"ContainerStarted","Data":"8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.273562 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.312418 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"6b49613f-eb42-441c-a98e-651ac383358e\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.312518 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"6b49613f-eb42-441c-a98e-651ac383358e\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.313968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b49613f-eb42-441c-a98e-651ac383358e" (UID: "6b49613f-eb42-441c-a98e-651ac383358e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.321265 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2" (OuterVolumeSpecName: "kube-api-access-cfdv2") pod "6b49613f-eb42-441c-a98e-651ac383358e" (UID: "6b49613f-eb42-441c-a98e-651ac383358e"). InnerVolumeSpecName "kube-api-access-cfdv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.414653 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.414701 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.947676 4869 generic.go:334] "Generic (PLEG): container finished" podID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerID="6d8d94685f54694bdd3d654fd30340b20f11060d58afcb8b6db65cc019ab404b" exitCode=0 Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.948123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqbqn" event={"ID":"663a2e70-1d18-41b3-bc31-7e8b44f00450","Type":"ContainerDied","Data":"6d8d94685f54694bdd3d654fd30340b20f11060d58afcb8b6db65cc019ab404b"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.964284 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerDied","Data":"2aa604e3dfd2060c4fc58fbd9ba211d90108d9d1fb97d4ced519b6388e7d6bc1"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.964354 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa604e3dfd2060c4fc58fbd9ba211d90108d9d1fb97d4ced519b6388e7d6bc1" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.964458 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.997180 4869 generic.go:334] "Generic (PLEG): container finished" podID="695a8791-53fd-414d-af01-753483223d32" containerID="9b15642290472abfbc4ace64421c6af055e5988041270bd6769c924998672a78" exitCode=0 Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.997240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c2-account-create-update-m2vvf" event={"ID":"695a8791-53fd-414d-af01-753483223d32","Type":"ContainerDied","Data":"9b15642290472abfbc4ace64421c6af055e5988041270bd6769c924998672a78"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.997268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c2-account-create-update-m2vvf" event={"ID":"695a8791-53fd-414d-af01-753483223d32","Type":"ContainerStarted","Data":"d4f078817dc98e4b14dcc6bdd60ef30263955ff02a7ab4a8c067ddb673feb707"} Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.434073 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.442844 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552628 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"663a2e70-1d18-41b3-bc31-7e8b44f00450\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"695a8791-53fd-414d-af01-753483223d32\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552792 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"695a8791-53fd-414d-af01-753483223d32\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"663a2e70-1d18-41b3-bc31-7e8b44f00450\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.555214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "663a2e70-1d18-41b3-bc31-7e8b44f00450" (UID: "663a2e70-1d18-41b3-bc31-7e8b44f00450"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.557465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "695a8791-53fd-414d-af01-753483223d32" (UID: "695a8791-53fd-414d-af01-753483223d32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.563818 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd" (OuterVolumeSpecName: "kube-api-access-8f4bd") pod "695a8791-53fd-414d-af01-753483223d32" (UID: "695a8791-53fd-414d-af01-753483223d32"). InnerVolumeSpecName "kube-api-access-8f4bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.576441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp" (OuterVolumeSpecName: "kube-api-access-ch6kp") pod "663a2e70-1d18-41b3-bc31-7e8b44f00450" (UID: "663a2e70-1d18-41b3-bc31-7e8b44f00450"). InnerVolumeSpecName "kube-api-access-ch6kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.658635 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.659594 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.659695 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.659813 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.796657 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.803287 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.014612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c2-account-create-update-m2vvf" event={"ID":"695a8791-53fd-414d-af01-753483223d32","Type":"ContainerDied","Data":"d4f078817dc98e4b14dcc6bdd60ef30263955ff02a7ab4a8c067ddb673feb707"} Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.014656 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4f078817dc98e4b14dcc6bdd60ef30263955ff02a7ab4a8c067ddb673feb707" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.014655 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.017282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqbqn" event={"ID":"663a2e70-1d18-41b3-bc31-7e8b44f00450","Type":"ContainerDied","Data":"8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105"} Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.017329 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.017382 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.480235 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b49613f-eb42-441c-a98e-651ac383358e" path="/var/lib/kubelet/pods/6b49613f-eb42-441c-a98e-651ac383358e/volumes" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.011813 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695a8791-53fd-414d-af01-753483223d32" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012480 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="695a8791-53fd-414d-af01-753483223d32" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012553 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerName="mariadb-database-create" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012559 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerName="mariadb-database-create" Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012568 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b49613f-eb42-441c-a98e-651ac383358e" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012575 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b49613f-eb42-441c-a98e-651ac383358e" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012667 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012674 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013076 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerName="mariadb-database-create" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013096 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b49613f-eb42-441c-a98e-651ac383358e" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013108 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013118 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="695a8791-53fd-414d-af01-753483223d32" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013867 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.016779 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q8bdk" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.017594 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.030348 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107557 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107665 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107725 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod 
\"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.217159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.217185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.218189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.231021 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.344024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.903239 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 14:51:44 crc kubenswrapper[4869]: I0202 14:51:44.054783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerStarted","Data":"99b5ca7935cfbc4a1d283bd53d5a36a9759bf57b988d18b5c8f5c459c5a63c51"} Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.844183 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.845715 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.853085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.871293 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.980399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.980493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.077229 4869 generic.go:334] "Generic (PLEG): container finished" podID="95035071-a194-40ba-9b64-700ae3121dc4" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93" exitCode=0 Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.077299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerDied","Data":"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"} Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.082458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.082519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.083398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.119199 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.166683 4869 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.693436 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.089105 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerStarted","Data":"1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d"} Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.089179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerStarted","Data":"2266dabb7f1c8e39cd8c38e3bb443e87550af12cc90d1334e4f69e4a7048fa16"} Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.092429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerStarted","Data":"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"} Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.092714 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.110857 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qx9sp" podStartSLOduration=2.110837486 podStartE2EDuration="2.110837486s" podCreationTimestamp="2026-02-02 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:47.10414969 +0000 UTC m=+1108.748786460" watchObservedRunningTime="2026-02-02 14:51:47.110837486 +0000 UTC m=+1108.755474256" Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.130019 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.981233849 podStartE2EDuration="1m4.129983999s" podCreationTimestamp="2026-02-02 14:50:43 +0000 UTC" firstStartedPulling="2026-02-02 14:50:45.990293187 +0000 UTC m=+1047.634929957" lastFinishedPulling="2026-02-02 14:51:12.139043337 +0000 UTC m=+1073.783680107" observedRunningTime="2026-02-02 14:51:47.127739104 +0000 UTC m=+1108.772375874" watchObservedRunningTime="2026-02-02 14:51:47.129983999 +0000 UTC m=+1108.774620769" Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.368584 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 14:51:48 crc kubenswrapper[4869]: I0202 14:51:48.103135 4869 generic.go:334] "Generic (PLEG): container finished" podID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerID="1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d" exitCode=0 Feb 02 14:51:48 crc kubenswrapper[4869]: I0202 14:51:48.103281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerDied","Data":"1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d"} Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.487137 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.560099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"cedd0523-58d4-494f-9d04-76029ad9ca4d\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.560316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"cedd0523-58d4-494f-9d04-76029ad9ca4d\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.561043 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cedd0523-58d4-494f-9d04-76029ad9ca4d" (UID: "cedd0523-58d4-494f-9d04-76029ad9ca4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.567953 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x" (OuterVolumeSpecName: "kube-api-access-kwq8x") pod "cedd0523-58d4-494f-9d04-76029ad9ca4d" (UID: "cedd0523-58d4-494f-9d04-76029ad9ca4d"). InnerVolumeSpecName "kube-api-access-kwq8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.663424 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.663774 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:50 crc kubenswrapper[4869]: I0202 14:51:50.129363 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerDied","Data":"2266dabb7f1c8e39cd8c38e3bb443e87550af12cc90d1334e4f69e4a7048fa16"} Feb 02 14:51:50 crc kubenswrapper[4869]: I0202 14:51:50.129786 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2266dabb7f1c8e39cd8c38e3bb443e87550af12cc90d1334e4f69e4a7048fa16" Feb 02 14:51:50 crc kubenswrapper[4869]: I0202 14:51:50.129402 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:52 crc kubenswrapper[4869]: I0202 14:51:52.891369 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-f7z74" podUID="d51425d7-d30c-466d-b478-17a637e3ef9f" containerName="ovn-controller" probeResult="failure" output=< Feb 02 14:51:52 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 14:51:52 crc kubenswrapper[4869]: > Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.010161 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.054889 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.288097 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:51:53 crc kubenswrapper[4869]: E0202 14:51:53.288479 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerName="mariadb-account-create-update" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.288500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerName="mariadb-account-create-update" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.288656 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerName="mariadb-account-create-update" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.289378 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.292359 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.311757 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333554 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333787 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.436463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.436946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod 
\"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437168 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437322 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.439211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.439244 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.440048 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.441356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod 
\"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.462185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.620555 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:54 crc kubenswrapper[4869]: I0202 14:51:54.169723 4869 generic.go:334] "Generic (PLEG): container finished" podID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" exitCode=0 Feb 02 14:51:54 crc kubenswrapper[4869]: I0202 14:51:54.169804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerDied","Data":"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7"} Feb 02 14:51:57 crc kubenswrapper[4869]: I0202 14:51:57.416923 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:51:57 crc kubenswrapper[4869]: I0202 14:51:57.908668 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-f7z74" Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.208455 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerID="7ceee7ca0afb25fecb47c7d1ea7c643849b3e2a4371bef94fa2e91ed301777b9" exitCode=0 Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.208554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74-config-lzp54" event={"ID":"d1cce5e8-8297-4595-9c62-8d593ed35b0f","Type":"ContainerDied","Data":"7ceee7ca0afb25fecb47c7d1ea7c643849b3e2a4371bef94fa2e91ed301777b9"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.208592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74-config-lzp54" event={"ID":"d1cce5e8-8297-4595-9c62-8d593ed35b0f","Type":"ContainerStarted","Data":"ecf8e6a6d474b5e7476f29ad4ae29e234e11668280caa810ad6939e8040c4054"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.210625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerStarted","Data":"787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.215825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerStarted","Data":"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.216713 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.274017 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nmqdp" podStartSLOduration=3.199846367 podStartE2EDuration="16.273983463s" 
podCreationTimestamp="2026-02-02 14:51:42 +0000 UTC" firstStartedPulling="2026-02-02 14:51:43.912329515 +0000 UTC m=+1105.556966285" lastFinishedPulling="2026-02-02 14:51:56.986466611 +0000 UTC m=+1118.631103381" observedRunningTime="2026-02-02 14:51:58.267668627 +0000 UTC m=+1119.912305417" watchObservedRunningTime="2026-02-02 14:51:58.273983463 +0000 UTC m=+1119.918620243" Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.302558 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371961.552246 podStartE2EDuration="1m15.30252928s" podCreationTimestamp="2026-02-02 14:50:43 +0000 UTC" firstStartedPulling="2026-02-02 14:50:45.572792672 +0000 UTC m=+1047.217429442" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:58.296520542 +0000 UTC m=+1119.941157332" watchObservedRunningTime="2026-02-02 14:51:58.30252928 +0000 UTC m=+1119.947166050" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.594623 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669539 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669689 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669713 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run" (OuterVolumeSpecName: "var-run") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670245 4869 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670228 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670854 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.671478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts" (OuterVolumeSpecName: "scripts") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.676433 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg" (OuterVolumeSpecName: "kube-api-access-s77dg") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "kube-api-access-s77dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.771880 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772232 4869 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772244 4869 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772255 4869 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772263 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.235835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74-config-lzp54" event={"ID":"d1cce5e8-8297-4595-9c62-8d593ed35b0f","Type":"ContainerDied","Data":"ecf8e6a6d474b5e7476f29ad4ae29e234e11668280caa810ad6939e8040c4054"} Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.235933 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf8e6a6d474b5e7476f29ad4ae29e234e11668280caa810ad6939e8040c4054" Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.236018 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.713124 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.719330 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:52:01 crc kubenswrapper[4869]: I0202 14:52:01.476335 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" path="/var/lib/kubelet/pods/d1cce5e8-8297-4595-9c62-8d593ed35b0f/volumes" Feb 02 14:52:05 crc kubenswrapper[4869]: I0202 14:52:05.217231 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:52:05 crc kubenswrapper[4869]: I0202 14:52:05.282374 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerID="787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec" exitCode=0 Feb 02 14:52:05 crc kubenswrapper[4869]: I0202 14:52:05.282441 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerDied","Data":"787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec"} Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.779383 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910676 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.918089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.924131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7" (OuterVolumeSpecName: "kube-api-access-959n7") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "kube-api-access-959n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.938700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.959670 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data" (OuterVolumeSpecName: "config-data") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014165 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014228 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014248 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014265 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.305368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerDied","Data":"99b5ca7935cfbc4a1d283bd53d5a36a9759bf57b988d18b5c8f5c459c5a63c51"} Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.305723 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b5ca7935cfbc4a1d283bd53d5a36a9759bf57b988d18b5c8f5c459c5a63c51" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.305927 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.741252 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:07 crc kubenswrapper[4869]: E0202 14:52:07.741732 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerName="glance-db-sync" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.741754 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerName="glance-db-sync" Feb 02 14:52:07 crc kubenswrapper[4869]: E0202 14:52:07.741792 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerName="ovn-config" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.741801 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerName="ovn-config" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.742012 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerName="glance-db-sync" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.742040 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerName="ovn-config" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.743137 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.757752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933120 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933439 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035675 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035805 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037015 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" 
(UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.057233 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.064332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.524153 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:09 crc kubenswrapper[4869]: I0202 14:52:09.333515 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerID="bc9dde5f802202af7a85f0bef2eac6285904a7c6caf12c1643635106506e9002" exitCode=0 Feb 02 14:52:09 crc kubenswrapper[4869]: I0202 14:52:09.333572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerDied","Data":"bc9dde5f802202af7a85f0bef2eac6285904a7c6caf12c1643635106506e9002"} Feb 02 14:52:09 crc kubenswrapper[4869]: I0202 14:52:09.333973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerStarted","Data":"a735d4f93e2231ae2a788ee232093dfbb8748b09065788ca6cc6337170b33936"} Feb 02 14:52:10 crc kubenswrapper[4869]: I0202 14:52:10.344799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerStarted","Data":"21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266"} Feb 02 14:52:10 crc kubenswrapper[4869]: I0202 14:52:10.345418 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:10 crc kubenswrapper[4869]: I0202 14:52:10.370374 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podStartSLOduration=3.370346653 podStartE2EDuration="3.370346653s" podCreationTimestamp="2026-02-02 14:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:10.362196542 +0000 UTC m=+1132.006833312" watchObservedRunningTime="2026-02-02 14:52:10.370346653 +0000 UTC m=+1132.014983423" Feb 02 14:52:14 crc kubenswrapper[4869]: I0202 14:52:14.861179 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.251601 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.253284 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.278984 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.305291 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.305372 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.356985 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.358255 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.362425 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.364303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387357 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387461 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.488930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9wc\" (UniqueName: 
\"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.489416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.489501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.489546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.490245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.490432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.514750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.519229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.523342 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.525003 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.528188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.528336 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.529104 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.529762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.536300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.574371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.591168 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.591334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.591392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.612462 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.614028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.624394 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.636766 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.654388 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697733 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.710183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.724725 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.725230 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.715164 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.727036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.736079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.736190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.738717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.785079 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.787687 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.798165 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.799627 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.829829 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.831872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.831953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.834518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.834526 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.850990 4869 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.852578 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.859110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.860417 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.862945 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.868021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934854 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934965 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.935936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.958608 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.037942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.038159 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.042396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.068727 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.074301 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.098783 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.141272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.160746 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.185849 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.398514 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.405520 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzwcn" event={"ID":"66e52e3f-cffb-44c2-9532-d645fa630d61","Type":"ContainerStarted","Data":"c1ca2e36cdbb37e9d7c021194e66d30657f92800b5c11ae7fe9202fd45a062ad"} Feb 02 14:52:16 crc kubenswrapper[4869]: W0202 14:52:16.422005 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a91413a_aa7c_4564_bf72_53071981cd62.slice/crio-bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77 WatchSource:0}: Error finding container bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77: Status 404 returned error can't find the container with id bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77 Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.581019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.052588 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 14:52:17 crc kubenswrapper[4869]: W0202 14:52:17.053840 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe36a818_4a20_4330_ade7_225a479d7e98.slice/crio-23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a WatchSource:0}: Error finding container 23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a: Status 404 returned error can't find the container with id 23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.150370 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.158858 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.303134 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.454273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kp9g2" event={"ID":"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0","Type":"ContainerStarted","Data":"02f0152486f6d15e27ee638bd4a0ad31fa89aef01cbf65c375e9ea7c3754cb1c"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.456690 4869 generic.go:334] "Generic (PLEG): container finished" podID="8a91413a-aa7c-4564-bf72-53071981cd62" containerID="8ad30a46b6571b102d653acdd91c3117aa9caffad9f46651f8d10f3bce6d1da5" exitCode=0 Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.456765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9bcf-account-create-update-pprmg" event={"ID":"8a91413a-aa7c-4564-bf72-53071981cd62","Type":"ContainerDied","Data":"8ad30a46b6571b102d653acdd91c3117aa9caffad9f46651f8d10f3bce6d1da5"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.456798 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9bcf-account-create-update-pprmg" 
event={"ID":"8a91413a-aa7c-4564-bf72-53071981cd62","Type":"ContainerStarted","Data":"bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.459426 4869 generic.go:334] "Generic (PLEG): container finished" podID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerID="a67405c792b46e1c7a87b10db412f756b77b32607171121e6cfbf4745d19567f" exitCode=0 Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.459503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzwcn" event={"ID":"66e52e3f-cffb-44c2-9532-d645fa630d61","Type":"ContainerDied","Data":"a67405c792b46e1c7a87b10db412f756b77b32607171121e6cfbf4745d19567f"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.462855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bznrb" event={"ID":"b5268e6d-82fe-45d8-a243-d37b326346a6","Type":"ContainerStarted","Data":"e0cb2f6956af5d713875e9a9977db1a357539fa9317755fad15a287086493ed9"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.464271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f93f-account-create-update-qbxcg" event={"ID":"6aa7f6b2-de14-408c-8960-662c2ab0e481","Type":"ContainerStarted","Data":"5a3eac8a14a3519fc3baa33a188a36940d29e94a2e52fef88f1631e6608a40a7"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.465479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerStarted","Data":"00a4cfc7849d2f9ea55fa2dd3fb70b062afc95bf4b2bcbb1f6797199fd69f8e6"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.540139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2561-account-create-update-zwwnx" event={"ID":"be36a818-4a20-4330-ade7-225a479d7e98","Type":"ContainerStarted","Data":"23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.070188 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.188562 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.191349 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-4c4vl" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" containerID="cri-o://b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950" gracePeriod=10 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.485314 4869 generic.go:334] "Generic (PLEG): container finished" podID="be36a818-4a20-4330-ade7-225a479d7e98" containerID="bc23c4af30b56127451b57906851e79c3c56f83ff81cbe94961025e57448181c" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.485502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2561-account-create-update-zwwnx" event={"ID":"be36a818-4a20-4330-ade7-225a479d7e98","Type":"ContainerDied","Data":"bc23c4af30b56127451b57906851e79c3c56f83ff81cbe94961025e57448181c"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.491331 4869 generic.go:334] "Generic (PLEG): container finished" podID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerID="fd9a1056bb847e46dd277ee512ce8a86dedc30d17b4d1ccaa855457de2552b81" exitCode=0 Feb 02 14:52:18 crc 
kubenswrapper[4869]: I0202 14:52:18.491863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kp9g2" event={"ID":"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0","Type":"ContainerDied","Data":"fd9a1056bb847e46dd277ee512ce8a86dedc30d17b4d1ccaa855457de2552b81"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.499489 4869 generic.go:334] "Generic (PLEG): container finished" podID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerID="b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.499661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerDied","Data":"b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.503637 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerID="213e1848995e356634b595c82a82047cb0a5c02652baad5bea2863f82f47bdbc" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.503733 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bznrb" event={"ID":"b5268e6d-82fe-45d8-a243-d37b326346a6","Type":"ContainerDied","Data":"213e1848995e356634b595c82a82047cb0a5c02652baad5bea2863f82f47bdbc"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.516393 4869 generic.go:334] "Generic (PLEG): container finished" podID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerID="59d9f27d8d1ae8627d4c79fa51d4258f445b3484686b6e2d609c49071e26d3ff" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.516745 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f93f-account-create-update-qbxcg" event={"ID":"6aa7f6b2-de14-408c-8960-662c2ab0e481","Type":"ContainerDied","Data":"59d9f27d8d1ae8627d4c79fa51d4258f445b3484686b6e2d609c49071e26d3ff"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.726825 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837670 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.848667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt" (OuterVolumeSpecName: "kube-api-access-2s9zt") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "kube-api-access-2s9zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.903933 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.925160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.939961 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.940007 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.940020 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.951121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.003068 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.019544 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config" (OuterVolumeSpecName: "config") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.043402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"66e52e3f-cffb-44c2-9532-d645fa630d61\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.043522 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"66e52e3f-cffb-44c2-9532-d645fa630d61\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.044263 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.044288 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.051408 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66e52e3f-cffb-44c2-9532-d645fa630d61" (UID: "66e52e3f-cffb-44c2-9532-d645fa630d61"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.051537 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl" (OuterVolumeSpecName: "kube-api-access-qfwnl") pod "66e52e3f-cffb-44c2-9532-d645fa630d61" (UID: "66e52e3f-cffb-44c2-9532-d645fa630d61"). InnerVolumeSpecName "kube-api-access-qfwnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.136007 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.146074 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.146114 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.247121 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"8a91413a-aa7c-4564-bf72-53071981cd62\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.247185 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"8a91413a-aa7c-4564-bf72-53071981cd62\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.249583 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a91413a-aa7c-4564-bf72-53071981cd62" (UID: "8a91413a-aa7c-4564-bf72-53071981cd62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.255774 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc" (OuterVolumeSpecName: "kube-api-access-9v9wc") pod "8a91413a-aa7c-4564-bf72-53071981cd62" (UID: "8a91413a-aa7c-4564-bf72-53071981cd62"). InnerVolumeSpecName "kube-api-access-9v9wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.349193 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.349245 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.530221 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerDied","Data":"ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8"} Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.530275 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.530283 4869 scope.go:117] "RemoveContainer" containerID="b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.536300 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.536315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzwcn" event={"ID":"66e52e3f-cffb-44c2-9532-d645fa630d61","Type":"ContainerDied","Data":"c1ca2e36cdbb37e9d7c021194e66d30657f92800b5c11ae7fe9202fd45a062ad"} Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.536503 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ca2e36cdbb37e9d7c021194e66d30657f92800b5c11ae7fe9202fd45a062ad" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.538265 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9bcf-account-create-update-pprmg" event={"ID":"8a91413a-aa7c-4564-bf72-53071981cd62","Type":"ContainerDied","Data":"bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77"} Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.538312 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.538328 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.569043 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.579786 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:52:21 crc kubenswrapper[4869]: I0202 14:52:21.477520 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" path="/var/lib/kubelet/pods/54b21918-ca4b-429c-8a6e-dd4bb0240efd/volumes" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.114777 4869 scope.go:117] "RemoveContainer" containerID="d4bc95d2879e70b645a2e7e235f1fbdcdf5fe19a1ef7176a88d572c086b1c57b" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.329771 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.371278 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.383794 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.408813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470026 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"be36a818-4a20-4330-ade7-225a479d7e98\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470154 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470189 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"b5268e6d-82fe-45d8-a243-d37b326346a6\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"be36a818-4a20-4330-ade7-225a479d7e98\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"b5268e6d-82fe-45d8-a243-d37b326346a6\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.471269 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" (UID: "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.471353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5268e6d-82fe-45d8-a243-d37b326346a6" (UID: "b5268e6d-82fe-45d8-a243-d37b326346a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.473620 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be36a818-4a20-4330-ade7-225a479d7e98" (UID: "be36a818-4a20-4330-ade7-225a479d7e98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.478048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z" (OuterVolumeSpecName: "kube-api-access-nj96z") pod "be36a818-4a20-4330-ade7-225a479d7e98" (UID: "be36a818-4a20-4330-ade7-225a479d7e98"). InnerVolumeSpecName "kube-api-access-nj96z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.480333 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl" (OuterVolumeSpecName: "kube-api-access-vbxxl") pod "b5268e6d-82fe-45d8-a243-d37b326346a6" (UID: "b5268e6d-82fe-45d8-a243-d37b326346a6"). InnerVolumeSpecName "kube-api-access-vbxxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.488374 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms" (OuterVolumeSpecName: "kube-api-access-dgfms") pod "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" (UID: "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0"). InnerVolumeSpecName "kube-api-access-dgfms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"6aa7f6b2-de14-408c-8960-662c2ab0e481\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577354 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"6aa7f6b2-de14-408c-8960-662c2ab0e481\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577822 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6aa7f6b2-de14-408c-8960-662c2ab0e481" (UID: "6aa7f6b2-de14-408c-8960-662c2ab0e481"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577967 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577998 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578016 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578028 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578039 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578050 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578062 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.580957 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4" (OuterVolumeSpecName: "kube-api-access-zm6b4") pod "6aa7f6b2-de14-408c-8960-662c2ab0e481" (UID: "6aa7f6b2-de14-408c-8960-662c2ab0e481"). 
InnerVolumeSpecName "kube-api-access-zm6b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.591060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kp9g2" event={"ID":"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0","Type":"ContainerDied","Data":"02f0152486f6d15e27ee638bd4a0ad31fa89aef01cbf65c375e9ea7c3754cb1c"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.591099 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.591111 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f0152486f6d15e27ee638bd4a0ad31fa89aef01cbf65c375e9ea7c3754cb1c" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.598552 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.598835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bznrb" event={"ID":"b5268e6d-82fe-45d8-a243-d37b326346a6","Type":"ContainerDied","Data":"e0cb2f6956af5d713875e9a9977db1a357539fa9317755fad15a287086493ed9"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.598882 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0cb2f6956af5d713875e9a9977db1a357539fa9317755fad15a287086493ed9" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.602442 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.602442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f93f-account-create-update-qbxcg" event={"ID":"6aa7f6b2-de14-408c-8960-662c2ab0e481","Type":"ContainerDied","Data":"5a3eac8a14a3519fc3baa33a188a36940d29e94a2e52fef88f1631e6608a40a7"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.602575 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3eac8a14a3519fc3baa33a188a36940d29e94a2e52fef88f1631e6608a40a7" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.606136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerStarted","Data":"cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.610242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2561-account-create-update-zwwnx" event={"ID":"be36a818-4a20-4330-ade7-225a479d7e98","Type":"ContainerDied","Data":"23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.610311 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.610393 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.628662 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6zf6z" podStartSLOduration=2.077299385 podStartE2EDuration="8.628612776s" podCreationTimestamp="2026-02-02 14:52:15 +0000 UTC" firstStartedPulling="2026-02-02 14:52:16.641895427 +0000 UTC m=+1138.286532197" lastFinishedPulling="2026-02-02 14:52:23.193208818 +0000 UTC m=+1144.837845588" observedRunningTime="2026-02-02 14:52:23.627537699 +0000 UTC m=+1145.272174479" watchObservedRunningTime="2026-02-02 14:52:23.628612776 +0000 UTC m=+1145.273249556" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.680586 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:27 crc kubenswrapper[4869]: I0202 14:52:27.652691 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerID="cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89" exitCode=0 Feb 02 14:52:27 crc kubenswrapper[4869]: I0202 14:52:27.652769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerDied","Data":"cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89"} Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.052024 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.197530 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.197674 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.197710 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.205489 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v" (OuterVolumeSpecName: "kube-api-access-df86v") pod "2b3583d5-e064-4a64-89ba-a97a7fcc993d" (UID: "2b3583d5-e064-4a64-89ba-a97a7fcc993d"). InnerVolumeSpecName "kube-api-access-df86v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.228455 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3583d5-e064-4a64-89ba-a97a7fcc993d" (UID: "2b3583d5-e064-4a64-89ba-a97a7fcc993d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.267207 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data" (OuterVolumeSpecName: "config-data") pod "2b3583d5-e064-4a64-89ba-a97a7fcc993d" (UID: "2b3583d5-e064-4a64-89ba-a97a7fcc993d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.300504 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.300616 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.300633 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.677460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerDied","Data":"00a4cfc7849d2f9ea55fa2dd3fb70b062afc95bf4b2bcbb1f6797199fd69f8e6"} Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.678014 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a4cfc7849d2f9ea55fa2dd3fb70b062afc95bf4b2bcbb1f6797199fd69f8e6" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.677536 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.136489 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137020 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be36a818-4a20-4330-ade7-225a479d7e98" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="be36a818-4a20-4330-ade7-225a479d7e98" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137048 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137055 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137072 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137086 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137107 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137117 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137140 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="init" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137148 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="init" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137163 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137172 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137190 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137199 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 
14:52:30.137239 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerName="keystone-db-sync" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137249 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerName="keystone-db-sync" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137414 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137427 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137440 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137448 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137455 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137464 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="be36a818-4a20-4330-ade7-225a479d7e98" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137474 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerName="keystone-db-sync" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137480 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.138220 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.153933 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.154933 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.155217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.155273 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.155376 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.156894 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.159412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.188322 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.198342 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.219844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.219922 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220172 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220313 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322527 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322703 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322732 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322831 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.340305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.356861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.357409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.362356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.370809 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.388677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.425785 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.425890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.425938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " 
pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.426032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.426075 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.426974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.431189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.437894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.443993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.473901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.482408 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.490508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.574322 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.576232 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.576323 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.598864 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.599360 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.599738 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-92dp9" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.637607 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.638893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.680847 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.681213 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.681372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9hgj2" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.720949 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738495 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738525 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc 
kubenswrapper[4869]: I0202 14:52:30.738617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.793997 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.842880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.842962 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843107 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843160 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc 
kubenswrapper[4869]: I0202 14:52:30.843195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843284 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.857039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.870812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.874442 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.874608 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.875731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.876369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.881537 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.881679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.896671 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.905941 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.910864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.913435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.919992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.927608 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.930854 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.939092 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.942024 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pg4t9" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.942263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.942324 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.025773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031657 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031747 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031836 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031850 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031867 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.069023 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.095594 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.107092 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.112702 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.113395 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.115512 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.120121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2d6ss" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.122541 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.127146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.134890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139168 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139324 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139514 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139731 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139854 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod 
\"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140061 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140278 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.148161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.148412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.161051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.166291 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.167965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.183957 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.196826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.198950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242633 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242736 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242959 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242983 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.244902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.245510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.245788 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.247474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.248362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.252766 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.255456 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.266887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.268042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.269161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.274146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.274736 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.285641 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.367513 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.378044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.504475 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.517450 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: W0202 14:52:31.533314 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8205ba1c_9c1b_4d76_83f5_2f30dba11533.slice/crio-786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce WatchSource:0}: Error finding container 786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce: Status 404 returned error can't find the container with id 786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.571412 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.597795 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.777716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" event={"ID":"8205ba1c-9c1b-4d76-83f5-2f30dba11533","Type":"ContainerStarted","Data":"786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce"} Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.782215 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerStarted","Data":"180b224a231cda3b4ae69afc28110045d922067babdece8f42149ecb73f011f0"} Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.915671 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.051628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.214410 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.237101 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.415445 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.426803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.797206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerStarted","Data":"91133cd950cbaf0a2fd654c7a3e7af936c27a7b6526630fb20d70ac6c178f469"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.824159 4869 generic.go:334] "Generic (PLEG): container finished" podID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerID="165e6d41cdbda9554672f48bfbf6dae797c409b00fe7e4b925b58548cd537f9b" exitCode=0 Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.824240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" event={"ID":"8205ba1c-9c1b-4d76-83f5-2f30dba11533","Type":"ContainerDied","Data":"165e6d41cdbda9554672f48bfbf6dae797c409b00fe7e4b925b58548cd537f9b"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.873542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerStarted","Data":"8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.873620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerStarted","Data":"7dd80f0858d5d331b1948ea1170d5424dc4e4ccf69aa8a84169b4800d0e4fc13"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.913787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" 
event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerStarted","Data":"078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.968107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerStarted","Data":"9d20104835b08533de4169d71a96c0b24b6f27636df1686a4f2724353347f5f4"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.039544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerStarted","Data":"86f6ff04cbc086ccbfd2e84539b1d96a49f77aa4c0aa0c0898599df70d3ebe0a"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.055389 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hz9pj" podStartSLOduration=3.055346829 podStartE2EDuration="3.055346829s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:32.913468876 +0000 UTC m=+1154.558105646" watchObservedRunningTime="2026-02-02 14:52:33.055346829 +0000 UTC m=+1154.699983599" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.104123 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f4vkc" podStartSLOduration=3.104095006 podStartE2EDuration="3.104095006s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:33.030811431 +0000 UTC m=+1154.675448221" watchObservedRunningTime="2026-02-02 14:52:33.104095006 +0000 UTC m=+1154.748731776" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.104881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerStarted","Data":"ee7fd35cc885ef9baea8bed6be792f654b41db4b87960643e8aaaa20fc9891a4"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.122841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"9a54c86921d5b0ef544bfd0a64a504e7bbbc4ab3d0006b551a598232317f2a2b"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.131809 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.600662 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723712 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723877 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.736257 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r" (OuterVolumeSpecName: "kube-api-access-b5v5r") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "kube-api-access-b5v5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.771517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config" (OuterVolumeSpecName: "config") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.772346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.785031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.804827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828221 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828260 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828275 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828284 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828296 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.150899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" event={"ID":"8205ba1c-9c1b-4d76-83f5-2f30dba11533","Type":"ContainerDied","Data":"786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce"} Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.151404 4869 scope.go:117] "RemoveContainer" containerID="165e6d41cdbda9554672f48bfbf6dae797c409b00fe7e4b925b58548cd537f9b" Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.151257 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.161413 4869 generic.go:334] "Generic (PLEG): container finished" podID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerID="5b057f5c2556a8f58e337485429c58bd6088b4c173270d5455938195918cef0b" exitCode=0 Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.163532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerDied","Data":"5b057f5c2556a8f58e337485429c58bd6088b4c173270d5455938195918cef0b"} Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.284259 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.298967 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:35 crc kubenswrapper[4869]: I0202 14:52:35.190053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerStarted","Data":"a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974"} Feb 02 14:52:35 crc kubenswrapper[4869]: I0202 14:52:35.190509 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:35 crc kubenswrapper[4869]: I0202 14:52:35.474753 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" path="/var/lib/kubelet/pods/8205ba1c-9c1b-4d76-83f5-2f30dba11533/volumes" Feb 02 14:52:38 crc kubenswrapper[4869]: I0202 14:52:38.229302 4869 generic.go:334] "Generic (PLEG): container finished" podID="02317eeb-3381-4883-b345-2ec84b402aae" containerID="078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287" exitCode=0 Feb 02 14:52:38 crc kubenswrapper[4869]: I0202 14:52:38.229384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerDied","Data":"078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287"} Feb 02 14:52:38 crc kubenswrapper[4869]: I0202 14:52:38.258297 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" podStartSLOduration=8.258265819 podStartE2EDuration="8.258265819s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:35.215278278 +0000 UTC m=+1156.859915078" watchObservedRunningTime="2026-02-02 14:52:38.258265819 +0000 UTC m=+1159.902902589" Feb 02 14:52:41 crc kubenswrapper[4869]: I0202 14:52:41.519684 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:41 crc kubenswrapper[4869]: I0202 14:52:41.627062 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:41 crc kubenswrapper[4869]: I0202 14:52:41.627417 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" containerID="cri-o://21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266" gracePeriod=10 Feb 02 
14:52:42 crc kubenswrapper[4869]: I0202 14:52:42.283718 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerID="21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266" exitCode=0 Feb 02 14:52:42 crc kubenswrapper[4869]: I0202 14:52:42.283784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerDied","Data":"21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266"} Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.065577 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.526820 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.711791 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.711881 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.725657 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts" (OuterVolumeSpecName: "scripts") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.734888 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5" (OuterVolumeSpecName: "kube-api-access-txtq5") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "kube-api-access-txtq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.750242 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.766184 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.808347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data" (OuterVolumeSpecName: "config-data") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814135 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814699 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814755 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814767 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814778 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814790 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814799 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.306187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerDied","Data":"180b224a231cda3b4ae69afc28110045d922067babdece8f42149ecb73f011f0"} Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.306646 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="180b224a231cda3b4ae69afc28110045d922067babdece8f42149ecb73f011f0" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.306434 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.744700 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.762807 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844117 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 14:52:44 crc kubenswrapper[4869]: E0202 14:52:44.844536 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02317eeb-3381-4883-b345-2ec84b402aae" containerName="keystone-bootstrap" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02317eeb-3381-4883-b345-2ec84b402aae" containerName="keystone-bootstrap" Feb 02 14:52:44 crc kubenswrapper[4869]: E0202 14:52:44.844599 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerName="init" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844607 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerName="init" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844785 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="02317eeb-3381-4883-b345-2ec84b402aae" containerName="keystone-bootstrap" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844817 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerName="init" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.845457 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.848600 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849057 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849071 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849520 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.863401 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.041881 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042684 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042779 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.144969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.151102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.152001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.155283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.168570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: 
\"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.170124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.170439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.220699 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.304240 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.304308 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.476781 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02317eeb-3381-4883-b345-2ec84b402aae" path="/var/lib/kubelet/pods/02317eeb-3381-4883-b345-2ec84b402aae/volumes" Feb 02 14:52:48 crc kubenswrapper[4869]: I0202 14:52:48.064965 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 02 14:52:53 crc kubenswrapper[4869]: I0202 14:52:53.065397 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 02 14:52:53 crc kubenswrapper[4869]: I0202 14:52:53.066289 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:56 crc kubenswrapper[4869]: E0202 14:52:56.722495 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 14:52:56 crc kubenswrapper[4869]: E0202 14:52:56.723394 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9jzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-s2dwg_openstack(f0e63b99-6d06-44ea-a061-b9f79551126a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:52:56 crc kubenswrapper[4869]: E0202 14:52:56.724994 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-s2dwg" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.040245 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.122467 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125849 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.136519 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw" (OuterVolumeSpecName: "kube-api-access-zw9pw") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "kube-api-access-zw9pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.202120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config" (OuterVolumeSpecName: "config") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.228604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.228954 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.228988 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.229001 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.233976 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.250022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.276961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.330732 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.330774 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.428242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerStarted","Data":"7ec50d3c95d3d2c9d96e976502e27bc356d7e820fe0c2796a704965f259c6dc6"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.432125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerStarted","Data":"8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.435528 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.438979 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerDied","Data":"a735d4f93e2231ae2a788ee232093dfbb8748b09065788ca6cc6337170b33936"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.439011 4869 scope.go:117] "RemoveContainer" containerID="21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.439146 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.449583 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4fqzr" podStartSLOduration=3.206805727 podStartE2EDuration="27.449555836s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:32.455526349 +0000 UTC m=+1154.100163119" lastFinishedPulling="2026-02-02 14:52:56.698276458 +0000 UTC m=+1178.342913228" observedRunningTime="2026-02-02 14:52:57.447232219 +0000 UTC m=+1179.091868989" watchObservedRunningTime="2026-02-02 14:52:57.449555836 +0000 UTC m=+1179.094192606" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.454517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerStarted","Data":"da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36"} Feb 02 14:52:57 crc kubenswrapper[4869]: E0202 14:52:57.455850 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-s2dwg" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.508681 4869 scope.go:117] "RemoveContainer" containerID="bc9dde5f802202af7a85f0bef2eac6285904a7c6caf12c1643635106506e9002" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.524481 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q447q" podStartSLOduration=3.121045594 podStartE2EDuration="27.52444974s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:32.269130095 +0000 UTC m=+1153.913766865" lastFinishedPulling="2026-02-02 14:52:56.672534241 +0000 UTC m=+1178.317171011" observedRunningTime="2026-02-02 14:52:57.495261858 +0000 UTC m=+1179.139898618" watchObservedRunningTime="2026-02-02 14:52:57.52444974 +0000 UTC m=+1179.169086510" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.584574 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.596863 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:58 crc kubenswrapper[4869]: I0202 14:52:58.479468 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerStarted","Data":"f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656"} Feb 02 14:52:58 crc kubenswrapper[4869]: I0202 14:52:58.503168 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zxtsl" podStartSLOduration=14.503145098 podStartE2EDuration="14.503145098s" podCreationTimestamp="2026-02-02 14:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:58.498160325 +0000 UTC m=+1180.142797095" watchObservedRunningTime="2026-02-02 14:52:58.503145098 +0000 UTC m=+1180.147781858" Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.475956 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" path="/var/lib/kubelet/pods/cc6051dd-8fa8-4c0b-bd98-9d180754d64a/volumes" Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.498742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121"} Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.500645 4869 generic.go:334] "Generic (PLEG): container finished" podID="367199b6-3340-454e-acc5-478f9b35b2df" containerID="8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2" exitCode=0 Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.502212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerDied","Data":"8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2"} Feb 02 14:53:00 crc kubenswrapper[4869]: I0202 14:53:00.988115 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.004660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"367199b6-3340-454e-acc5-478f9b35b2df\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.004750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"367199b6-3340-454e-acc5-478f9b35b2df\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.004936 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"367199b6-3340-454e-acc5-478f9b35b2df\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.074633 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt" (OuterVolumeSpecName: "kube-api-access-tdbnt") pod "367199b6-3340-454e-acc5-478f9b35b2df" (UID: "367199b6-3340-454e-acc5-478f9b35b2df"). InnerVolumeSpecName "kube-api-access-tdbnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.095107 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config" (OuterVolumeSpecName: "config") pod "367199b6-3340-454e-acc5-478f9b35b2df" (UID: "367199b6-3340-454e-acc5-478f9b35b2df"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.139207 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.139508 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.172548 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "367199b6-3340-454e-acc5-478f9b35b2df" (UID: "367199b6-3340-454e-acc5-478f9b35b2df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.242767 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.539818 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerID="da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36" exitCode=0 Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.539989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerDied","Data":"da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36"} Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.546262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerDied","Data":"7dd80f0858d5d331b1948ea1170d5424dc4e4ccf69aa8a84169b4800d0e4fc13"} Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.546326 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd80f0858d5d331b1948ea1170d5424dc4e4ccf69aa8a84169b4800d0e4fc13" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.546431 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.551414 4869 generic.go:334] "Generic (PLEG): container finished" podID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerID="f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656" exitCode=0 Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.551484 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerDied","Data":"f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656"} Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.845870 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:01 crc kubenswrapper[4869]: E0202 14:53:01.848329 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.849514 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" Feb 02 14:53:01 crc kubenswrapper[4869]: E0202 14:53:01.851996 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="init" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852074 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="init" Feb 02 14:53:01 crc kubenswrapper[4869]: E0202 14:53:01.852121 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367199b6-3340-454e-acc5-478f9b35b2df" containerName="neutron-db-sync" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852163 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="367199b6-3340-454e-acc5-478f9b35b2df" containerName="neutron-db-sync" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852894 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="367199b6-3340-454e-acc5-478f9b35b2df" containerName="neutron-db-sync" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852980 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.854502 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858207 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858243 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858299 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858322 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.859065 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.901144 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.903154 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.908659 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9hgj2" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.915267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.915571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.915761 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.925590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.953374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961615 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.972104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.975960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.977290 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.977588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.990001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") 
pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.071267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.072570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.073531 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.080900 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.091932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.229897 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.252783 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.572484 4869 generic.go:334] "Generic (PLEG): container finished" podID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerID="8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6" exitCode=0 Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.572755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerDied","Data":"8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6"} Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.400925 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.410131 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.412936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.418634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.441768 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529879 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.530009 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.530050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.631361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.631909 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.631982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632066 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632105 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod 
\"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.639990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.640045 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.642259 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.644706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.654275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.654515 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.655064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.744225 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.628393 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.628598 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerDied","Data":"91133cd950cbaf0a2fd654c7a3e7af936c27a7b6526630fb20d70ac6c178f469"} Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.629138 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91133cd950cbaf0a2fd654c7a3e7af936c27a7b6526630fb20d70ac6c178f469" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.630797 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerDied","Data":"7ec50d3c95d3d2c9d96e976502e27bc356d7e820fe0c2796a704965f259c6dc6"} Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.630822 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec50d3c95d3d2c9d96e976502e27bc356d7e820fe0c2796a704965f259c6dc6" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.632174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerDied","Data":"ee7fd35cc885ef9baea8bed6be792f654b41db4b87960643e8aaaa20fc9891a4"} Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.632210 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7fd35cc885ef9baea8bed6be792f654b41db4b87960643e8aaaa20fc9891a4" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.663007 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.685609 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.776964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777024 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777109 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"818ee387-cf73-45bc-8925-c234d5fd8ee3\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777169 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777211 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777233 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777256 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: 
\"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777380 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777490 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"818ee387-cf73-45bc-8925-c234d5fd8ee3\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777513 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"818ee387-cf73-45bc-8925-c234d5fd8ee3\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.778999 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs" (OuterVolumeSpecName: "logs") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.785577 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4" (OuterVolumeSpecName: "kube-api-access-5xlk4") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "kube-api-access-5xlk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.785668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.786820 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "818ee387-cf73-45bc-8925-c234d5fd8ee3" (UID: "818ee387-cf73-45bc-8925-c234d5fd8ee3"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.787428 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.787512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8" (OuterVolumeSpecName: "kube-api-access-f5mg8") pod "818ee387-cf73-45bc-8925-c234d5fd8ee3" (UID: "818ee387-cf73-45bc-8925-c234d5fd8ee3"). InnerVolumeSpecName "kube-api-access-f5mg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.788094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl" (OuterVolumeSpecName: "kube-api-access-l85sl") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "kube-api-access-l85sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.791227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts" (OuterVolumeSpecName: "scripts") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.810619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts" (OuterVolumeSpecName: "scripts") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.835356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.835562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "818ee387-cf73-45bc-8925-c234d5fd8ee3" (UID: "818ee387-cf73-45bc-8925-c234d5fd8ee3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.835975 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.836853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data" (OuterVolumeSpecName: "config-data") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.840131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data" (OuterVolumeSpecName: "config-data") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.880901 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.880990 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881027 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881039 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881048 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881057 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881066 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881077 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881106 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881118 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") on node 
\"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881127 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881135 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881145 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881156 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.114002 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:07 crc kubenswrapper[4869]: W0202 14:53:07.124378 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47cb4795_faf4_4845_8f4c_3675b5613437.slice/crio-0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c WatchSource:0}: Error finding container 0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c: Status 404 returned error can't find the container with id 0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.507361 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:07 crc kubenswrapper[4869]: W0202 14:53:07.534224 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb918eb2a_3cab_422f_ba7d_f06c4ec21ef4.slice/crio-120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791 WatchSource:0}: Error finding container 120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791: Status 404 returned error can't find the container with id 120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791 Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.641497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerStarted","Data":"120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.643683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645595 4869 generic.go:334] "Generic (PLEG): container finished" podID="47cb4795-faf4-4845-8f4c-3675b5613437" containerID="419d84c102f4f60e2c9ce52715ebe01d27cf44677cf9646b669ee52aa5fb04bc" exitCode=0 Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645685 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645697 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerDied","Data":"419d84c102f4f60e2c9ce52715ebe01d27cf44677cf9646b669ee52aa5fb04bc"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645778 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerStarted","Data":"0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645807 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785202 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:07 crc kubenswrapper[4869]: E0202 14:53:07.785720 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerName="placement-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerName="placement-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: E0202 14:53:07.785756 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerName="keystone-bootstrap" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785764 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerName="keystone-bootstrap" Feb 02 14:53:07 crc kubenswrapper[4869]: E0202 14:53:07.785785 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerName="barbican-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785794 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerName="barbican-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.786083 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerName="barbican-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.786102 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerName="keystone-bootstrap" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.786110 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerName="placement-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.787212 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.793641 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pg4t9" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794148 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794474 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794538 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794773 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.797736 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.901760 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-575599577-dmndq"] Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.903396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910421 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910477 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910541 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911335 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911564 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911718 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.925084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.925348 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.934995 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-575599577-dmndq"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-combined-ca-bundle\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012735 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-config-data\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-scripts\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jzc\" (UniqueName: \"kubernetes.io/projected/fc4c6770-5954-4777-8c4f-47397d045008-kube-api-access-h8jzc\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012882 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012999 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-fernet-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013042 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-credential-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-internal-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013191 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-public-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013222 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013291 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.018281 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.033734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.039800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.051590 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.054689 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.057236 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.057694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.058514 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.058746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.070449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2d6ss" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.070775 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.070971 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.075045 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.114759 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-config-data\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-scripts\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jzc\" (UniqueName: \"kubernetes.io/projected/fc4c6770-5954-4777-8c4f-47397d045008-kube-api-access-h8jzc\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-fernet-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121445 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-credential-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-internal-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-public-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-combined-ca-bundle\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.132231 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.134208 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.148895 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.149714 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-combined-ca-bundle\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.150478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-public-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.198826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-config-data\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.199314 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-internal-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.199748 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-fernet-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.201403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-credential-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.202008 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-scripts\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.209722 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jzc\" (UniqueName: \"kubernetes.io/projected/fc4c6770-5954-4777-8c4f-47397d045008-kube-api-access-h8jzc\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.224412 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.229877 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.229947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.229980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230133 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230158 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230216 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2njq\" (UniqueName: 
\"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.286761 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.295123 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.369889 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370393 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.371130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.371899 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.371977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.372012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.372047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.379053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.386032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.387821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.387934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.390229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.401413 4869 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.401719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.410394 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.410509 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.421480 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.432031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.435831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.450626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.488392 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d7f6679db-zbdxv"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.490640 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.507042 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-675f9657dc-6qw7m"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.529673 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d7f6679db-zbdxv"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.529800 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.545013 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-675f9657dc-6qw7m"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.548524 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.562006 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.563817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.567436 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.584765 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585109 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-logs\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585288 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data-custom\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585562 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-combined-ca-bundle\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585987 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnt9\" (UniqueName: \"kubernetes.io/projected/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-kube-api-access-kbnt9\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.590478 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data-custom\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693898 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data-custom\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694006 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxk7g\" (UniqueName: \"kubernetes.io/projected/18463ac0-a171-4ae0-9201-8df3d574eb70-kube-api-access-dxk7g\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694165 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-combined-ca-bundle\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694348 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-combined-ca-bundle\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694586 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnt9\" (UniqueName: \"kubernetes.io/projected/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-kube-api-access-kbnt9\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " 
pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-logs\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18463ac0-a171-4ae0-9201-8df3d574eb70-logs\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.697983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.702472 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-logs\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: 
\"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.703154 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.705272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.707415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.709321 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.714222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.720843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data-custom\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.727445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerStarted","Data":"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"} Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.727535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerStarted","Data":"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"} Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.729601 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.737778 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.738784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-combined-ca-bundle\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " 
pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.745301 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.749172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerStarted","Data":"571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846"} Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.749373 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" containerID="cri-o://571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846" gracePeriod=10 Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.749753 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.772212 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnt9\" (UniqueName: \"kubernetes.io/projected/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-kube-api-access-kbnt9\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.774439 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bb87b4954-l5h9p" podStartSLOduration=7.774414107 podStartE2EDuration="7.774414107s" podCreationTimestamp="2026-02-02 14:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:08.766789498 +0000 UTC m=+1190.411426278" watchObservedRunningTime="2026-02-02 14:53:08.774414107 +0000 UTC m=+1190.419050877" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.799862 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18463ac0-a171-4ae0-9201-8df3d574eb70-logs\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.800058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data-custom\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.800150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.800207 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxk7g\" (UniqueName: \"kubernetes.io/projected/18463ac0-a171-4ae0-9201-8df3d574eb70-kube-api-access-dxk7g\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-combined-ca-bundle\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802447 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802496 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.803368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18463ac0-a171-4ae0-9201-8df3d574eb70-logs\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.804607 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" podStartSLOduration=7.804577233 podStartE2EDuration="7.804577233s" podCreationTimestamp="2026-02-02 14:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:08.795680733 +0000 UTC m=+1190.440317503" 
watchObservedRunningTime="2026-02-02 14:53:08.804577233 +0000 UTC m=+1190.449214003" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.805239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.833125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.834711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data-custom\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.835816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-combined-ca-bundle\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.839956 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.870508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.871415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.872224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxk7g\" (UniqueName: \"kubernetes.io/projected/18463ac0-a171-4ae0-9201-8df3d574eb70-kube-api-access-dxk7g\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.885335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " 
pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.892490 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.928518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.948753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.031414 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.120594 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.746311 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-575599577-dmndq"] Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.794624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerStarted","Data":"f71f18fd5c51bc2ff8e4203c7e7213ae442d57834261ba22fc6581334d9a1f73"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.839949 4869 generic.go:334] "Generic (PLEG): container finished" podID="47cb4795-faf4-4845-8f4c-3675b5613437" containerID="571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846" exitCode=0 Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.840042 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerDied","Data":"571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.853374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-575599577-dmndq" event={"ID":"fc4c6770-5954-4777-8c4f-47397d045008","Type":"ContainerStarted","Data":"cbbd11885d2dd89a0ee90b2accf8bc63a4b6150bcca43f03dd770a7c6cccf327"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.856701 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.866629 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerStarted","Data":"a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.866683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerStarted","Data":"45f00cd48b456ba32635e74b444d036ced51d5190a5131b65618e8664fdb1787"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.947632 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.947762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.947980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.948013 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.948081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.964256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9" (OuterVolumeSpecName: "kube-api-access-qvfx9") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "kube-api-access-qvfx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.008764 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.040707 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.051120 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078029 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bbd64cf97-7t5h5"] Feb 02 14:53:10 crc kubenswrapper[4869]: E0202 14:53:10.078616 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078632 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" Feb 02 14:53:10 crc kubenswrapper[4869]: E0202 14:53:10.078650 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="init" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078658 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="init" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078980 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.081538 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.082428 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3a4838_a42e_4ff4_a4b2_7dd079089a42.slice/crio-eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb WatchSource:0}: Error finding container eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb: Status 404 returned error can't find the container with id eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.088417 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bbd64cf97-7t5h5"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.144164 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.205959 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1dcc76_d41e_4492_95d0_dcbb0b1254b4.slice/crio-3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c WatchSource:0}: Error finding container 3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c: Status 404 returned error can't find the container with id 3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.217128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.226703 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ad3cba7_fb7e_43f6_b818_4b2c392590e0.slice/crio-e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000 WatchSource:0}: Error finding container e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000: Status 404 returned error can't find the container with id e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000 Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.256918 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-ovndb-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-combined-ca-bundle\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-internal-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-public-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257244 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-httpd-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4g2\" (UniqueName: \"kubernetes.io/projected/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-kube-api-access-xz4g2\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.272566 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d7f6679db-zbdxv"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.280442 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.299061 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eddd0ab_42d6_4db0_b0db_eeb0259f4ec3.slice/crio-95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd WatchSource:0}: Error finding container 95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd: Status 404 returned error can't find the container with id 95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.363747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-httpd-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.363957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4g2\" (UniqueName: \"kubernetes.io/projected/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-kube-api-access-xz4g2\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364044 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-ovndb-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-combined-ca-bundle\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-internal-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-public-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364612 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc 
kubenswrapper[4869]: I0202 14:53:10.369512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.371419 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.374138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-combined-ca-bundle\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.391495 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-ovndb-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.399071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4g2\" (UniqueName: \"kubernetes.io/projected/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-kube-api-access-xz4g2\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.401299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-httpd-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.402083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-internal-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.403965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-public-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.420510 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config" (OuterVolumeSpecName: "config") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.429463 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-675f9657dc-6qw7m"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.429880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.433727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.468082 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.468374 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.468478 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.472726 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.473363 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c561af1_f926_4ced_9d2e_05778fed8a44.slice/crio-19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18 WatchSource:0}: Error finding container 19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18: Status 404 returned error can't find the container with id 19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18 Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.898430 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerStarted","Data":"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.918844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerDied","Data":"0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.918930 4869 scope.go:117] "RemoveContainer" containerID="571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.919169 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.949719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" event={"ID":"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3","Type":"ContainerStarted","Data":"95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.955250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerStarted","Data":"3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.978707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-575599577-dmndq" event={"ID":"fc4c6770-5954-4777-8c4f-47397d045008","Type":"ContainerStarted","Data":"2f8f9684b1886cc82b30b6226705d756eea2f05b32d706f5455a6bb4ff96e63e"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.980405 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.009119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerStarted","Data":"b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.009563 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070755 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070842 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerStarted","Data":"eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070877 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerStarted","Data":"19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-675f9657dc-6qw7m" event={"ID":"18463ac0-a171-4ae0-9201-8df3d574eb70","Type":"ContainerStarted","Data":"4dd953267fa3787e6996b19cbf74956668a1fe03d2b2c1bab19ac6f07f3d8493"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerStarted","Data":"e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.073359 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-575599577-dmndq" podStartSLOduration=4.073337828 podStartE2EDuration="4.073337828s" 
podCreationTimestamp="2026-02-02 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:11.022010587 +0000 UTC m=+1192.666647357" watchObservedRunningTime="2026-02-02 14:53:11.073337828 +0000 UTC m=+1192.717974598" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.086663 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c4d7559c7-79dhq" podStartSLOduration=7.086638257 podStartE2EDuration="7.086638257s" podCreationTimestamp="2026-02-02 14:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:11.068517158 +0000 UTC m=+1192.713153928" watchObservedRunningTime="2026-02-02 14:53:11.086638257 +0000 UTC m=+1192.731275017" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.103249 4869 scope.go:117] "RemoveContainer" containerID="419d84c102f4f60e2c9ce52715ebe01d27cf44677cf9646b669ee52aa5fb04bc" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.272996 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bbd64cf97-7t5h5"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.284736 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc5588748-k6f99"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.287025 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.296690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc5588748-k6f99"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-config-data\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422082 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-scripts\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvptk\" (UniqueName: \"kubernetes.io/projected/ec674145-26a6-4ce9-9e00-083bccdad283-kube-api-access-cvptk\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-internal-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422217 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-public-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec674145-26a6-4ce9-9e00-083bccdad283-logs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-combined-ca-bundle\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.488038 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" path="/var/lib/kubelet/pods/47cb4795-faf4-4845-8f4c-3675b5613437/volumes" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-config-data\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-scripts\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvptk\" (UniqueName: \"kubernetes.io/projected/ec674145-26a6-4ce9-9e00-083bccdad283-kube-api-access-cvptk\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-internal-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525955 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-public-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.526013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec674145-26a6-4ce9-9e00-083bccdad283-logs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc 
kubenswrapper[4869]: I0202 14:53:11.526069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-combined-ca-bundle\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.529068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec674145-26a6-4ce9-9e00-083bccdad283-logs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.538901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-scripts\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.539085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-combined-ca-bundle\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.539263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-public-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.541630 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-config-data\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.543364 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-internal-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.567654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvptk\" (UniqueName: \"kubernetes.io/projected/ec674145-26a6-4ce9-9e00-083bccdad283-kube-api-access-cvptk\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.660331 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.083748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bbd64cf97-7t5h5" event={"ID":"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca","Type":"ContainerStarted","Data":"4b104cee6894c28ad44308cd6cf2d5f59a2244071ec6b719e2459022cf1481e0"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.087455 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerID="9d24ac1d4cb800028d8b0cae08d3371a0141fabf6b8ee870243781d99e8bd219" exitCode=0 Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.087540 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerDied","Data":"9d24ac1d4cb800028d8b0cae08d3371a0141fabf6b8ee870243781d99e8bd219"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.090348 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerStarted","Data":"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.090534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.091145 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.093752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerStarted","Data":"30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.094005 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb87b4954-l5h9p" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api" containerID="cri-o://c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" gracePeriod=30 Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.094183 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb87b4954-l5h9p" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" containerID="cri-o://5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" gracePeriod=30 Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.154630 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79c776b57b-76pd5" podStartSLOduration=5.154609505 podStartE2EDuration="5.154609505s" podCreationTimestamp="2026-02-02 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:12.144393102 +0000 UTC m=+1193.789029882" watchObservedRunningTime="2026-02-02 14:53:12.154609505 +0000 UTC m=+1193.799246275" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.319461 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc5588748-k6f99"] Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.729854 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77794c6b74-fhtds"] Feb 02 14:53:12 crc 
kubenswrapper[4869]: I0202 14:53:12.732856 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.742593 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.745069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.802382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77794c6b74-fhtds"] Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmlt\" (UniqueName: \"kubernetes.io/projected/bbb63205-2a5c-4177-8b7f-2a141324ba49-kube-api-access-kxmlt\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-public-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-internal-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904562 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-combined-ca-bundle\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb63205-2a5c-4177-8b7f-2a141324ba49-logs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904624 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data-custom\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 
14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.006846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-combined-ca-bundle\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb63205-2a5c-4177-8b7f-2a141324ba49-logs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data-custom\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmlt\" (UniqueName: \"kubernetes.io/projected/bbb63205-2a5c-4177-8b7f-2a141324ba49-kube-api-access-kxmlt\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007626 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-public-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007772 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-internal-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.013884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb63205-2a5c-4177-8b7f-2a141324ba49-logs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.015542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-public-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.015812 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data-custom\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.016242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.016836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-combined-ca-bundle\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.017101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-internal-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.030231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmlt\" (UniqueName: \"kubernetes.io/projected/bbb63205-2a5c-4177-8b7f-2a141324ba49-kube-api-access-kxmlt\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.056430 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.114395 4869 generic.go:334] "Generic (PLEG): container finished" podID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" exitCode=0 Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.114582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerDied","Data":"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.117387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerStarted","Data":"a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.118064 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.118129 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.119662 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc5588748-k6f99" event={"ID":"ec674145-26a6-4ce9-9e00-083bccdad283","Type":"ContainerStarted","Data":"44cccd8d0b052082992b6a91275c9579e26c2c63f40ad77c48f4d7adc5b83993"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.121230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bbd64cf97-7t5h5" event={"ID":"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca","Type":"ContainerStarted","Data":"d016fa5ec7bdf0f7d1b45785f283fafd1908584e89557ab383231269829371d5"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.129874 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerStarted","Data":"639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.129948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.168443 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59bd6db9d6-z6bh8" podStartSLOduration=5.168414212 podStartE2EDuration="5.168414212s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:13.144593202 +0000 UTC m=+1194.789229972" watchObservedRunningTime="2026-02-02 14:53:13.168414212 +0000 UTC m=+1194.813050982" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.199281 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-ttvch" podStartSLOduration=5.199255146 podStartE2EDuration="5.199255146s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:13.176980884 +0000 UTC m=+1194.821617654" watchObservedRunningTime="2026-02-02 14:53:13.199255146 +0000 UTC 
m=+1194.843891916" Feb 02 14:53:14 crc kubenswrapper[4869]: I0202 14:53:14.149016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc5588748-k6f99" event={"ID":"ec674145-26a6-4ce9-9e00-083bccdad283","Type":"ContainerStarted","Data":"83a4d14bcbb12c200c324e8e3f81b3b7ed84ad9c08a61b317cc43995548b52c0"} Feb 02 14:53:14 crc kubenswrapper[4869]: I0202 14:53:14.693248 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77794c6b74-fhtds"] Feb 02 14:53:14 crc kubenswrapper[4869]: W0202 14:53:14.712652 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb63205_2a5c_4177_8b7f_2a141324ba49.slice/crio-4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609 WatchSource:0}: Error finding container 4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609: Status 404 returned error can't find the container with id 4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609 Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.251949 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-675f9657dc-6qw7m" event={"ID":"18463ac0-a171-4ae0-9201-8df3d574eb70","Type":"ContainerStarted","Data":"49aa13495ff012785f6cbad25793c330a84cac85dd60f37679961c9284263028"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.255532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerStarted","Data":"8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.258607 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerStarted","Data":"3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.264492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" event={"ID":"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3","Type":"ContainerStarted","Data":"9abd5fe1fa5ac24cf4114633dce2bf05ae28693402cee1f3e9d851b59359b889"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.266517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc5588748-k6f99" event={"ID":"ec674145-26a6-4ce9-9e00-083bccdad283","Type":"ContainerStarted","Data":"6fdef382ff95dd8ee1fd435776d623c3d9b832e9ad25c82012575a87654ba18d"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.267269 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.267346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.280595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bbd64cf97-7t5h5" event={"ID":"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca","Type":"ContainerStarted","Data":"2ee5470b5b8e5d5ff05e0d6e6d1c5495f32906d17a86a858aad17186fb901bbc"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.283142 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.286101 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" podStartSLOduration=3.224019918 podStartE2EDuration="7.286076566s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.167684898 +0000 UTC m=+1191.812321668" lastFinishedPulling="2026-02-02 14:53:14.229741546 +0000 UTC m=+1195.874378316" observedRunningTime="2026-02-02 14:53:15.28219126 +0000 UTC m=+1196.926828020" watchObservedRunningTime="2026-02-02 14:53:15.286076566 +0000 UTC m=+1196.930713336" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.293884 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77794c6b74-fhtds" event={"ID":"bbb63205-2a5c-4177-8b7f-2a141324ba49","Type":"ContainerStarted","Data":"a7c45f780b4e93b2590a48689aea4853fdbf85cfa83b87ebb46b7331ac84ed9e"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.293973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77794c6b74-fhtds" event={"ID":"bbb63205-2a5c-4177-8b7f-2a141324ba49","Type":"ContainerStarted","Data":"4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.304868 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.305086 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.305374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.306607 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.306750 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5" gracePeriod=600 Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.336073 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc5588748-k6f99" podStartSLOduration=4.336043042 podStartE2EDuration="4.336043042s" podCreationTimestamp="2026-02-02 14:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:15.332587147 +0000 UTC m=+1196.977223917" watchObservedRunningTime="2026-02-02 14:53:15.336043042 +0000 UTC m=+1196.980679812" 
Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.380879 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bbd64cf97-7t5h5" podStartSLOduration=5.380849592 podStartE2EDuration="5.380849592s" podCreationTimestamp="2026-02-02 14:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:15.361520173 +0000 UTC m=+1197.006156943" watchObservedRunningTime="2026-02-02 14:53:15.380849592 +0000 UTC m=+1197.025486362" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.307508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77794c6b74-fhtds" event={"ID":"bbb63205-2a5c-4177-8b7f-2a141324ba49","Type":"ContainerStarted","Data":"72b5e3e6869b43d44736a0a14489b839e5de3b97ac12618669703cb23d6c1f8b"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.309634 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.309670 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.311416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-675f9657dc-6qw7m" event={"ID":"18463ac0-a171-4ae0-9201-8df3d574eb70","Type":"ContainerStarted","Data":"e78201b2a276911e29e1b21ed47e7bb4f8fa0dfbac6e45a8ff947dc12f3a9c53"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.314023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerStarted","Data":"4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.316773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerStarted","Data":"0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321455 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5" exitCode=0 Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321708 4869 scope.go:117] "RemoveContainer" containerID="132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.332528 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" 
event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerStarted","Data":"6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.335216 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" event={"ID":"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3","Type":"ContainerStarted","Data":"c94b38428dfa7121ceddf733bc0447aacfd91627945553c335ebcd8fe2f0710b"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.401301 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77794c6b74-fhtds" podStartSLOduration=4.401260743 podStartE2EDuration="4.401260743s" podCreationTimestamp="2026-02-02 14:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:16.355258784 +0000 UTC m=+1197.999895554" watchObservedRunningTime="2026-02-02 14:53:16.401260743 +0000 UTC m=+1198.045897513" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.427831 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" podStartSLOduration=4.502266862 podStartE2EDuration="8.427798s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.302036054 +0000 UTC m=+1191.946672824" lastFinishedPulling="2026-02-02 14:53:14.227567182 +0000 UTC m=+1195.872203962" observedRunningTime="2026-02-02 14:53:16.422387175 +0000 UTC m=+1198.067023955" watchObservedRunningTime="2026-02-02 14:53:16.427798 +0000 UTC m=+1198.072434770" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.546550 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-s2dwg" podStartSLOduration=4.402591351 podStartE2EDuration="46.546520409s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:31.943471194 +0000 UTC m=+1153.588107964" lastFinishedPulling="2026-02-02 14:53:14.087400252 +0000 UTC m=+1195.732037022" observedRunningTime="2026-02-02 14:53:16.461179196 +0000 UTC m=+1198.105815966" watchObservedRunningTime="2026-02-02 14:53:16.546520409 +0000 UTC m=+1198.191157179" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.582141 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.583601 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-675f9657dc-6qw7m" podStartSLOduration=4.850293007 podStartE2EDuration="8.583577316s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.494397576 +0000 UTC m=+1192.139034346" lastFinishedPulling="2026-02-02 14:53:14.227681885 +0000 UTC m=+1195.872318655" observedRunningTime="2026-02-02 14:53:16.483689073 +0000 UTC m=+1198.128325843" watchObservedRunningTime="2026-02-02 14:53:16.583577316 +0000 UTC m=+1198.228214086" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.603973 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.620569 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c9668db5f-6b8rj" podStartSLOduration=5.630583208 podStartE2EDuration="9.620536871s" 
podCreationTimestamp="2026-02-02 14:53:07 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.238396618 +0000 UTC m=+1191.883033388" lastFinishedPulling="2026-02-02 14:53:14.228350281 +0000 UTC m=+1195.872987051" observedRunningTime="2026-02-02 14:53:16.534949142 +0000 UTC m=+1198.179585932" watchObservedRunningTime="2026-02-02 14:53:16.620536871 +0000 UTC m=+1198.265173641" Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387662 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c9668db5f-6b8rj" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" containerID="cri-o://8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387719 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" containerID="cri-o://3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387720 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c9668db5f-6b8rj" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" containerID="cri-o://4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387847 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" containerID="cri-o://6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.896186 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.094172 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.095030 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" containerID="cri-o://a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974" gracePeriod=10 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.426587 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerID="3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38" exitCode=143 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.426706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerDied","Data":"3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38"} Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444213 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerID="4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea" exitCode=0 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444263 4869 generic.go:334] "Generic (PLEG): container 
finished" podID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerID="8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8" exitCode=143 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerDied","Data":"4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea"} Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerDied","Data":"8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8"} Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.491984 4869 generic.go:334] "Generic (PLEG): container finished" podID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerID="a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974" exitCode=0 Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.492104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerDied","Data":"a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974"} Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.504652 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerID="6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9" exitCode=0 Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.504710 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerDied","Data":"6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9"} Feb 02 14:53:21 crc kubenswrapper[4869]: I0202 14:53:21.162632 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:21 crc kubenswrapper[4869]: I0202 14:53:21.519502 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Feb 02 14:53:21 crc kubenswrapper[4869]: I0202 14:53:21.817627 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:23 crc kubenswrapper[4869]: I0202 14:53:23.563039 4869 generic.go:334] "Generic (PLEG): container finished" podID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerID="0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a" exitCode=0 Feb 02 14:53:23 crc kubenswrapper[4869]: I0202 14:53:23.563121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerDied","Data":"0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a"} Feb 02 14:53:24 crc kubenswrapper[4869]: I0202 14:53:24.901172 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:24 crc kubenswrapper[4869]: I0202 14:53:24.979497 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 
14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.062104 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.062401 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59bd6db9d6-z6bh8" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" containerID="cri-o://30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8" gracePeriod=30 Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.062816 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59bd6db9d6-z6bh8" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" containerID="cri-o://a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287" gracePeriod=30 Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.611275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerDied","Data":"9d20104835b08533de4169d71a96c0b24b6f27636df1686a4f2724353347f5f4"} Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.611799 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d20104835b08533de4169d71a96c0b24b6f27636df1686a4f2724353347f5f4" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.619293 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerDied","Data":"86f6ff04cbc086ccbfd2e84539b1d96a49f77aa4c0aa0c0898599df70d3ebe0a"} Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.619354 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f6ff04cbc086ccbfd2e84539b1d96a49f77aa4c0aa0c0898599df70d3ebe0a" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.623097 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerID="30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8" exitCode=143 Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.624282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerDied","Data":"30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8"} Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.628525 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.641451 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705579 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705924 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706030 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706401 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: 
\"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706457 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.710966 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.734326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts" (OuterVolumeSpecName: "scripts") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.734517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw" (OuterVolumeSpecName: "kube-api-access-l9jzw") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "kube-api-access-l9jzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.744443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.744735 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv" (OuterVolumeSpecName: "kube-api-access-n8krv") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "kube-api-access-n8krv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813738 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813780 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813796 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813815 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813828 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.847890 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.853741 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data" (OuterVolumeSpecName: "config-data") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.880552 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config" (OuterVolumeSpecName: "config") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.882227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.888441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.894331 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.916957 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917000 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917010 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917020 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917030 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917042 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.292205 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.328871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329092 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.334874 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs" (OuterVolumeSpecName: "logs") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.338074 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg" (OuterVolumeSpecName: "kube-api-access-5tcqg") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "kube-api-access-5tcqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.344365 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.361867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.383430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data" (OuterVolumeSpecName: "config-data") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431480 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431540 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431557 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431568 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431582 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerDied","Data":"e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000"} Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638811 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638807 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638831 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638847 4869 scope.go:117] "RemoveContainer" containerID="4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.746724 4869 scope.go:117] "RemoveContainer" containerID="8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.781212 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.804324 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.814399 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.835941 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840849 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs" (OuterVolumeSpecName: "logs") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.841010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.841164 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.843054 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.847848 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.855083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.855229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq" (OuterVolumeSpecName: "kube-api-access-j2njq") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "kube-api-access-j2njq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.896952 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.945228 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.945284 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.945302 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.950622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data" (OuterVolumeSpecName: "config-data") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.041821 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041845 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.041865 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerName="cinder-db-sync" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041875 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerName="cinder-db-sync" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.041892 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="init" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041901 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="init" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042006 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042019 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042031 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042039 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042060 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042070 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042081 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042090 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042349 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042406 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042434 4869 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042449 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042463 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerName="cinder-db-sync" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042614 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.043812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.047449 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-92dp9" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050892 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050950 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.079158 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.155747 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.155835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.157721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.158088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.158347 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.158411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.193839 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.200195 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.232462 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261254 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261284 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261325 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261407 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " 
pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261565 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261723 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.265925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.265936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.266984 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.267071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.282128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.342377 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.344187 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.352781 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.358410 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363639 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363773 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363852 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363878 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.365544 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.365835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.366074 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.366161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.366288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.367356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.370081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.374160 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.393747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.467675 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.471607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"cinder-api-0\" (UID: 
\"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.472198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.474202 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.476418 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.476938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.477614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.478936 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" path="/var/lib/kubelet/pods/09d16c44-bf33-426a-ae17-9ec52f7c4bdf/volumes" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.479695 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" path="/var/lib/kubelet/pods/4ad3cba7-fb7e-43f6-b818-4b2c392590e0/volumes" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.496560 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.540015 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.682169 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.748521 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75"} Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749624 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" containerID="cri-o://40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75" gracePeriod=30 Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749887 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="sg-core" containerID="cri-o://32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb" gracePeriod=30 Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749955 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" containerID="cri-o://905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121" gracePeriod=30 Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.750826 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" containerID="cri-o://3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49" gracePeriod=30 Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.793634 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.609012135 podStartE2EDuration="57.793604435s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:32.269553226 +0000 UTC m=+1153.914189996" lastFinishedPulling="2026-02-02 14:53:26.454145526 +0000 UTC m=+1208.098782296" observedRunningTime="2026-02-02 14:53:27.780208933 +0000 UTC m=+1209.424845703" watchObservedRunningTime="2026-02-02 14:53:27.793604435 +0000 UTC m=+1209.438241205" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.826491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerDied","Data":"eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb"} Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.826560 4869 scope.go:117] "RemoveContainer" containerID="6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.826812 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.906844 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.944264 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.982183 4869 scope.go:117] "RemoveContainer" containerID="3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38" Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.058403 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.425598 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.517145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.847183 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerID="a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287" exitCode=0 Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.847258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerDied","Data":"a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.853221 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerStarted","Data":"803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.869126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerStarted","Data":"3aa5c96598f9d84b8ea60ab2f8542911baacbe20302c3b591676275481c40de5"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883661 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75" exitCode=0 Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883707 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb" exitCode=2 Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883718 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49" exitCode=0 Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.897004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerStarted","Data":"f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.919479 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.027988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029498 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.031631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs" (OuterVolumeSpecName: "logs") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.033791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.036752 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj" (OuterVolumeSpecName: "kube-api-access-7qkxj") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "kube-api-access-7qkxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.058029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.088211 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data" (OuterVolumeSpecName: "config-data") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141112 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141169 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141180 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141190 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.410528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.482809 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" path="/var/lib/kubelet/pods/2b3a4838-a42e-4ff4-a4b2-7dd079089a42/volumes" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.912518 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerID="e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f" exitCode=0 Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.913214 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" 
event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerDied","Data":"e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918110 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121" exitCode=0 Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"9a54c86921d5b0ef544bfd0a64a504e7bbbc4ab3d0006b551a598232317f2a2b"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918263 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a54c86921d5b0ef544bfd0a64a504e7bbbc4ab3d0006b551a598232317f2a2b" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.925242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerStarted","Data":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.929804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerDied","Data":"19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.929879 4869 scope.go:117] "RemoveContainer" containerID="a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.929980 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.940918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerStarted","Data":"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"} Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.042740 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.052013 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.059979 4869 scope.go:117] "RemoveContainer" containerID="30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.060600 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.176665 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178212 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178350 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178425 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.179128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.179325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.184269 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5" (OuterVolumeSpecName: "kube-api-access-n68q5") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "kube-api-access-n68q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.187341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts" (OuterVolumeSpecName: "scripts") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.212233 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.262721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281258 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281303 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281318 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281330 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281343 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281356 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.293602 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data" (OuterVolumeSpecName: "config-data") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.382522 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.954419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerStarted","Data":"c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6"} Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.956390 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.962932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerStarted","Data":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.963144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.963170 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" containerID="cri-o://c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" gracePeriod=30 Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.963189 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" containerID="cri-o://3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" gracePeriod=30 Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.971440 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.971438 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerStarted","Data":"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.003641 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" podStartSLOduration=4.003610559 podStartE2EDuration="4.003610559s" podCreationTimestamp="2026-02-02 14:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:30.976628891 +0000 UTC m=+1212.621265661" watchObservedRunningTime="2026-02-02 14:53:31.003610559 +0000 UTC m=+1212.648247329" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.006266 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.076521869 podStartE2EDuration="4.006251615s" podCreationTimestamp="2026-02-02 14:53:27 +0000 UTC" firstStartedPulling="2026-02-02 14:53:28.078419045 +0000 UTC m=+1209.723055825" lastFinishedPulling="2026-02-02 14:53:29.008148801 +0000 UTC m=+1210.652785571" observedRunningTime="2026-02-02 14:53:31.004171894 +0000 UTC m=+1212.648808664" watchObservedRunningTime="2026-02-02 14:53:31.006251615 +0000 UTC m=+1212.650888415" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.051473 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.051440933 podStartE2EDuration="4.051440933s" podCreationTimestamp="2026-02-02 14:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:31.036395851 +0000 UTC m=+1212.681032621" watchObservedRunningTime="2026-02-02 14:53:31.051440933 +0000 UTC m=+1212.696077703" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.063369 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.076840 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.088617 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089219 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089249 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089281 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089291 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089305 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" 
containerName="sg-core" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="sg-core" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089326 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089335 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089356 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089364 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089374 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089381 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089595 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089632 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089650 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="sg-core" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089672 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089687 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089706 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.091902 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.099342 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.099481 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.101721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.101945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102335 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102419 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102487 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.119111 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.204776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.204978 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205128 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.206042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.206934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.213042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.237823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.238834 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.240252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.240695 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.481126 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.494511 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" path="/var/lib/kubelet/pods/9c561af1-f926-4ced-9d2e-05778fed8a44/volumes" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.495831 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" path="/var/lib/kubelet/pods/fe3740ce-c24a-48b4-aab3-d1da5bf36089/volumes" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.785462 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.945802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.946869 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.946974 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc 
kubenswrapper[4869]: I0202 14:53:31.947306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs" (OuterVolumeSpecName: "logs") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.948142 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.948169 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.953461 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw" (OuterVolumeSpecName: "kube-api-access-cm2jw") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "kube-api-access-cm2jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.953572 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts" (OuterVolumeSpecName: "scripts") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.979300 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.981148 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.988750 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" exitCode=0 Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.988806 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" exitCode=143 Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.990533 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerDied","Data":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerDied","Data":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerDied","Data":"f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991483 4869 scope.go:117] "RemoveContainer" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.013734 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data" (OuterVolumeSpecName: "config-data") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.017402 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.031667 4869 scope.go:117] "RemoveContainer" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050079 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050127 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050141 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050151 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050161 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.061939 4869 scope.go:117] "RemoveContainer" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.062625 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": container with ID starting with 3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e not found: ID does not exist" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.062683 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} err="failed to get container status \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": rpc error: code = NotFound desc = could not find container \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": container with ID starting with 3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.062720 4869 scope.go:117] "RemoveContainer" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.063099 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": container with ID starting with c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295 not found: ID does not exist" 
containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063144 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} err="failed to get container status \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": rpc error: code = NotFound desc = could not find container \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": container with ID starting with c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295 not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063175 4869 scope.go:117] "RemoveContainer" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063406 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} err="failed to get container status \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": rpc error: code = NotFound desc = could not find container \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": container with ID starting with 3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063437 4869 scope.go:117] "RemoveContainer" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} err="failed to get container status \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": rpc error: code = NotFound desc = could not find container \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": container with ID starting with c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295 not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.257739 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-bb87b4954-l5h9p" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.335697 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.355941 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.366734 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.367443 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367475 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.367529 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367542 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367795 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367834 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.369230 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.373017 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.373321 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.375390 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.377101 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.378577 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs27z\" (UniqueName: \"kubernetes.io/projected/1fbb1ee0-3403-49aa-9e5c-3926dd981751-kube-api-access-rs27z\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568261 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fbb1ee0-3403-49aa-9e5c-3926dd981751-logs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568391 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-scripts\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: 
I0202 14:53:32.568526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.569004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fbb1ee0-3403-49aa-9e5c-3926dd981751-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fbb1ee0-3403-49aa-9e5c-3926dd981751-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671383 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs27z\" (UniqueName: \"kubernetes.io/projected/1fbb1ee0-3403-49aa-9e5c-3926dd981751-kube-api-access-rs27z\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fbb1ee0-3403-49aa-9e5c-3926dd981751-logs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-scripts\") pod \"cinder-api-0\" (UID: 
\"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671651 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.672152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fbb1ee0-3403-49aa-9e5c-3926dd981751-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.672787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fbb1ee0-3403-49aa-9e5c-3926dd981751-logs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.678479 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.678506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.679614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-scripts\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.679894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.683879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " 
pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.685869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.696667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs27z\" (UniqueName: \"kubernetes.io/projected/1fbb1ee0-3403-49aa-9e5c-3926dd981751-kube-api-access-rs27z\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.986529 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:33 crc kubenswrapper[4869]: I0202 14:53:33.004967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"2a3a8afa5f4f39b9c1443825049b785119a54a533b4cf3c5d4655fb9914dd6f0"} Feb 02 14:53:33 crc kubenswrapper[4869]: I0202 14:53:33.444268 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:33 crc kubenswrapper[4869]: I0202 14:53:33.476143 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" path="/var/lib/kubelet/pods/b6c7f465-f9c2-4384-9c28-18d85ff08928/volumes" Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.016048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689"} Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.019067 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fbb1ee0-3403-49aa-9e5c-3926dd981751","Type":"ContainerStarted","Data":"ddea4a48b8633b4394cc12365b06bb9f9213034a3028ea7a9e898361896bc268"} Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.019120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fbb1ee0-3403-49aa-9e5c-3926dd981751","Type":"ContainerStarted","Data":"404436d8eaf31f75b99403a98828292902d7571560b26df1ebe76d9a5c3c9e59"} Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.764842 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.033508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505"} Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.036608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fbb1ee0-3403-49aa-9e5c-3926dd981751","Type":"ContainerStarted","Data":"ff3e0f7641de5392159f1e81cf81a107d01673a12900046b51f5863f5740bed3"} Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.036872 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.083318 4869 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.083290193 podStartE2EDuration="3.083290193s" podCreationTimestamp="2026-02-02 14:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:35.060958301 +0000 UTC m=+1216.705595071" watchObservedRunningTime="2026-02-02 14:53:35.083290193 +0000 UTC m=+1216.727926963" Feb 02 14:53:36 crc kubenswrapper[4869]: I0202 14:53:36.049621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876"} Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.542480 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.624800 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.625141 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-ttvch" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns" containerID="cri-o://639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55" gracePeriod=10 Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.679887 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.784128 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.086617 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerID="639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55" exitCode=0 Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.087286 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler" containerID="cri-o://b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" gracePeriod=30 Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.087414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerDied","Data":"639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55"} Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.087722 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe" containerID="cri-o://54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" gracePeriod=30 Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.219171 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419462 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419827 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419867 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419941 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.431579 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp" (OuterVolumeSpecName: "kube-api-access-k2rmp") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "kube-api-access-k2rmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.480198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config" (OuterVolumeSpecName: "config") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.488419 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.506336 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.515161 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522672 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522710 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522723 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522734 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522752 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.110833 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" exitCode=0 Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.110984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerDied","Data":"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"} Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.114652 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerDied","Data":"3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c"} Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.114791 4869 scope.go:117] "RemoveContainer" containerID="639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.115566 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.134439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f"} Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.144288 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.183095 4869 scope.go:117] "RemoveContainer" containerID="9d24ac1d4cb800028d8b0cae08d3371a0141fabf6b8ee870243781d99e8bd219" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.257518 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.494692736 podStartE2EDuration="8.257488826s" podCreationTimestamp="2026-02-02 14:53:31 +0000 UTC" firstStartedPulling="2026-02-02 14:53:32.031808573 +0000 UTC m=+1213.676445343" lastFinishedPulling="2026-02-02 14:53:37.794604663 +0000 UTC m=+1219.439241433" observedRunningTime="2026-02-02 14:53:39.219123876 +0000 UTC m=+1220.863760646" watchObservedRunningTime="2026-02-02 14:53:39.257488826 +0000 UTC m=+1220.902125606" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.287310 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.293922 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.476364 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" path="/var/lib/kubelet/pods/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4/volumes" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.579351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.640143 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.420896 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.462611 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.557859 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.558208 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4d7559c7-79dhq" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api" containerID="cri-o://a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023" gracePeriod=30 Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.558825 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4d7559c7-79dhq" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd" containerID="cri-o://b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb" gracePeriod=30 Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.161262 4869 
generic.go:334] "Generic (PLEG): container finished" podID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerID="b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb" exitCode=0 Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.161368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerDied","Data":"b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb"} Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.746118 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.827857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828398 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828547 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828589 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828787 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.829549 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.838094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts" (OuterVolumeSpecName: "scripts") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.838175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.865011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr" (OuterVolumeSpecName: "kube-api-access-8bpnr") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "kube-api-access-8bpnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.908590 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936379 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936711 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936798 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936877 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.960061 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data" (OuterVolumeSpecName: "config-data") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.042387 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174013 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" exitCode=0 Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174063 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerDied","Data":"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"} Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerDied","Data":"803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2"} Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174112 4869 scope.go:117] "RemoveContainer" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174215 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.219547 4869 scope.go:117] "RemoveContainer" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.220532 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.243031 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.256211 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257203 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns" Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257238 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="init" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257244 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="init" Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257256 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257263 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler" Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257674 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe" Feb 02 14:53:42 crc 
kubenswrapper[4869]: I0202 14:53:42.257685 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.258063 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.258097 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.258117 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.259827 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.263313 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.277551 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.282940 4869 scope.go:117] "RemoveContainer" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.289373 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2\": container with ID starting with 54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2 not found: ID does not exist" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.289438 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"} err="failed to get container status \"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2\": rpc error: code = NotFound desc = could not find container \"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2\": container with ID starting with 54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2 not found: ID does not exist" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.289476 4869 scope.go:117] "RemoveContainer" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.296273 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5\": container with ID starting with b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5 not found: ID does not exist" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.296348 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"} err="failed to get container status \"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5\": rpc error: code = NotFound desc = could not find container 
\"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5\": container with ID starting with b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5 not found: ID does not exist" Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371388 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371578 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371751 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-conmon-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-conmon-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371781 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371800 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371818 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.373791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.373845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2sd2\" (UniqueName: \"kubernetes.io/projected/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-kube-api-access-v2sd2\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.379872 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.379976 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380002 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380029 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380315 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380344 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope: no such file or directory Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2sd2\" (UniqueName: \"kubernetes.io/projected/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-kube-api-access-v2sd2\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480367 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.490155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.490427 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.494665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.502506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.516901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2sd2\" (UniqueName: \"kubernetes.io/projected/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-kube-api-access-v2sd2\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.584502 4869 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.615779 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7fa8424_d792_4e4f_bd02_d7369407b5ad.slice/crio-b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb918eb2a_3cab_422f_ba7d_f06c4ec21ef4.slice/crio-conmon-c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7fa8424_d792_4e4f_bd02_d7369407b5ad.slice/crio-conmon-b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1dcc76_d41e_4492_95d0_dcbb0b1254b4.slice/crio-639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1dcc76_d41e_4492_95d0_dcbb0b1254b4.slice/crio-3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c\": RecentStats: unable to find data in memory cache]" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.120464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bb87b4954-l5h9p_b918eb2a-3cab-422f-ba7d-f06c4ec21ef4/neutron-api/0.log" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.121139 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189789 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bb87b4954-l5h9p_b918eb2a-3cab-422f-ba7d-f06c4ec21ef4/neutron-api/0.log" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189846 4869 generic.go:334] "Generic (PLEG): container finished" podID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" exitCode=137 Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerDied","Data":"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"} Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189941 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerDied","Data":"120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791"} Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189963 4869 scope.go:117] "RemoveContainer" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.190199 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.210085 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj" (OuterVolumeSpecName: "kube-api-access-6wztj") 
pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "kube-api-access-6wztj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.224259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.241123 4869 scope.go:117] "RemoveContainer" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.254737 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.265213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.296686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config" (OuterVolumeSpecName: "config") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304161 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304202 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304214 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304227 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.321264 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.323681 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.342462 4869 scope.go:117] "RemoveContainer" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.343508 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d\": container with ID starting with 5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d not found: ID does not exist" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.343571 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"} err="failed to get container status \"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d\": rpc error: code = NotFound desc = could not find container \"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d\": container with ID starting with 5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d not found: ID does not exist" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.343611 4869 scope.go:117] "RemoveContainer" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.343929 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a\": container with ID starting with c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a not found: ID does not exist" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.343952 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"} err="failed to get container status \"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a\": rpc error: code = NotFound desc = could not find container \"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a\": container with ID starting with c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a not found: ID does not exist" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.429022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.431239 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.431698 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-79c776b57b-76pd5" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log" containerID="cri-o://cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" gracePeriod=30 Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.432273 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-79c776b57b-76pd5" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api" containerID="cri-o://c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" gracePeriod=30 Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.497388 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" path="/var/lib/kubelet/pods/a1598fcb-466e-4c4c-8429-1a211bfcfc19/volumes" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.510641 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.511050 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api" Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.511118 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511131 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511326 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511359 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.512689 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.516293 4869 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.522806 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.522886 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-v6krz" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.523472 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.542087 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.599750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.616784 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config-secret\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mqr\" (UniqueName: \"kubernetes.io/projected/9c3c55b0-c9be-4635-9562-347406f90dff-kube-api-access-k6mqr\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619677 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config-secret\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721612 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mqr\" (UniqueName: \"kubernetes.io/projected/9c3c55b0-c9be-4635-9562-347406f90dff-kube-api-access-k6mqr\") pod 
\"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.722983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.727732 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.730874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config-secret\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.745459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mqr\" (UniqueName: \"kubernetes.io/projected/9c3c55b0-c9be-4635-9562-347406f90dff-kube-api-access-k6mqr\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient" Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.890807 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.292184 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" exitCode=143 Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.293609 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerDied","Data":"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"} Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.296661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8f007a5-a428-44ff-8c6d-5de0d08beb7c","Type":"ContainerStarted","Data":"8a26183b7c7e9706d3d6df35e1fc3c81acb49df13d6ac4dddb74f90a9b0c75d8"} Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.514762 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 14:53:44 crc kubenswrapper[4869]: W0202 14:53:44.514951 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c3c55b0_c9be_4635_9562_347406f90dff.slice/crio-298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01 WatchSource:0}: Error finding container 298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01: Status 404 returned error can't find the container with id 298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01 Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.313883 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8f007a5-a428-44ff-8c6d-5de0d08beb7c","Type":"ContainerStarted","Data":"906a2f9e990fbc8c5e19d425341489eff99b3f77f960f279696d24c68004ddda"} Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.314373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8f007a5-a428-44ff-8c6d-5de0d08beb7c","Type":"ContainerStarted","Data":"291fc9b297d71d064b6e249c4f7f64024554cdfb1d9bed064aa5dd85c2bb63d6"} Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.317939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9c3c55b0-c9be-4635-9562-347406f90dff","Type":"ContainerStarted","Data":"298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01"} Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.345999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.345972288 podStartE2EDuration="3.345972288s" podCreationTimestamp="2026-02-02 14:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:45.340272997 +0000 UTC m=+1226.984909767" watchObservedRunningTime="2026-02-02 14:53:45.345972288 +0000 UTC m=+1226.990609058" Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.477130 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" path="/var/lib/kubelet/pods/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4/volumes" Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.589680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 14:53:46 crc kubenswrapper[4869]: 
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.350893 4869 generic.go:334] "Generic (PLEG): container finished" podID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerID="a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023" exitCode=0
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.351126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerDied","Data":"a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023"}
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.541195 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq"
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613388 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613409 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613653 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613965 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.644668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.660212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf" (OuterVolumeSpecName: "kube-api-access-pfjmf") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "kube-api-access-pfjmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.687347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.708128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config" (OuterVolumeSpecName: "config") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.710128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.717274 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719396 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719420 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719433 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719446 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.733413 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.763971 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.822391 4869 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.823069 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.974794 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79c776b57b-76pd5"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.030734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.030932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031185 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031799 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031868 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.038990 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9" (OuterVolumeSpecName: "kube-api-access-kk2f9") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "kube-api-access-kk2f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.040873 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts" (OuterVolumeSpecName: "scripts") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.041631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs" (OuterVolumeSpecName: "logs") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.137380 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.137638 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.137700 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.154562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data" (OuterVolumeSpecName: "config-data") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.166116 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.226492 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.240686 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.240729 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.240744 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.245914 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.345589 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.371602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerDied","Data":"45f00cd48b456ba32635e74b444d036ced51d5190a5131b65618e8664fdb1787"}
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.371676 4869 scope.go:117] "RemoveContainer" containerID="b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.371679 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376761 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" exitCode=0
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376826 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79c776b57b-76pd5"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerDied","Data":"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"}
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376887 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerDied","Data":"f71f18fd5c51bc2ff8e4203c7e7213ae442d57834261ba22fc6581334d9a1f73"}
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.432828 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-79c776b57b-76pd5"]
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.438819 4869 scope.go:117] "RemoveContainer" containerID="a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.446434 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-79c776b57b-76pd5"]
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.533717 4869 scope.go:117] "RemoveContainer" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.536744 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" path="/var/lib/kubelet/pods/9a6e5980-cab0-4c02-9d50-0633106097cb/volumes"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.537504 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"]
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.537535 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"]
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.585853 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.601155 4869 scope.go:117] "RemoveContainer" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.655125 4869 scope.go:117] "RemoveContainer" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"
Feb 02 14:53:47 crc kubenswrapper[4869]: E0202 14:53:47.656236 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115\": container with ID starting with c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115 not found: ID does not exist" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.656300 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"} err="failed to get container status \"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115\": rpc error: code = NotFound desc = could not find container \"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115\": container with ID starting with c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115 not found: ID does not exist"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.656365 4869 scope.go:117] "RemoveContainer" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"
Feb 02 14:53:47 crc kubenswrapper[4869]: E0202 14:53:47.658189 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d\": container with ID starting with cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d not found: ID does not exist" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.658217 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"} err="failed to get container status \"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d\": rpc error: code = NotFound desc = could not find container \"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d\": container with ID starting with cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d not found: ID does not exist"
Feb 02 14:53:49 crc kubenswrapper[4869]: I0202 14:53:49.474239 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" path="/var/lib/kubelet/pods/c7fa8424-d792-4e4f-bd02-d7369407b5ad/volumes"
Feb 02 14:53:52 crc kubenswrapper[4869]: I0202 14:53:52.849479 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 02 14:53:57 crc kubenswrapper[4869]: I0202 14:53:57.501135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9c3c55b0-c9be-4635-9562-347406f90dff","Type":"ContainerStarted","Data":"266c16280253b1077268ac63c782114a693c22a38707b7b1728ac8ec0d489988"}
Feb 02 14:53:57 crc kubenswrapper[4869]: I0202 14:53:57.524449 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.591373014 podStartE2EDuration="14.524420244s" podCreationTimestamp="2026-02-02 14:53:43 +0000 UTC" firstStartedPulling="2026-02-02 14:53:44.524262787 +0000 UTC m=+1226.168899557" lastFinishedPulling="2026-02-02 14:53:56.457310017 +0000 UTC m=+1238.101946787" observedRunningTime="2026-02-02 14:53:57.520867256 +0000 UTC m=+1239.165504026" watchObservedRunningTime="2026-02-02 14:53:57.524420244 +0000 UTC m=+1239.169057014"
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.265405 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.265790 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" containerID="cri-o://494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" gracePeriod=30
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.266413 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" containerID="cri-o://062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" gracePeriod=30
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.266521 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" containerID="cri-o://5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" gracePeriod=30
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.266586 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" containerID="cri-o://603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" gracePeriod=30
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.282788 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": EOF"
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.523128 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" exitCode=0
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.524323 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" exitCode=2
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.525525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f"}
Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.525656 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876"}
Feb 02 14:53:59 crc kubenswrapper[4869]: I0202 14:53:59.536996 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" exitCode=0
Feb 02 14:53:59 crc kubenswrapper[4869]: I0202 14:53:59.537083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689"}
Feb 02 14:54:01 crc kubenswrapper[4869]: I0202 14:54:01.483179 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": dial tcp 10.217.0.157:3000: connect: connection refused"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.192332 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9kpbk"]
Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193440 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193466 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd"
Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api"
Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193520 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193528 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api"
Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193550 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193566 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193759 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193787 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193798 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193818 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.194812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.203880 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9kpbk"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.287808 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.289936 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.299255 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.299818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.300133 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.300287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.307228 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.395256 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gssfn"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.399268 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402580 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402665 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402739 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.403890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.404326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.410263 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.414105 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.418509 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.436281 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.443610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.451566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.527226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.527465 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.538184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.551820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.561793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.598765 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.614575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.648191 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.650188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.659291 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.667972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.668068 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.668163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.668209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.669497 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.669881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.672039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.701749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.702418 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.737045 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.770455 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.770740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.820970 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.822824 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.833754 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.837254 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"]
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.872849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.873020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.873106 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.873164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.874262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.895537 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.897667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.978743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.979391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.980739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.002405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.081296 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.189743 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.327860 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"]
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.372390 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9kpbk"]
Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.386774 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1748ab6_c795_414c_a52b_7bf549358524.slice/crio-3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821 WatchSource:0}: Error finding container 3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821: Status 404 returned error can't find the container with id 3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.524965 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"]
Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.534279 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7ca155_a072_4915_b5c5_e0b36a29af9b.slice/crio-16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa WatchSource:0}: Error finding container 16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa: Status 404 returned error can't find the container with id 16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.616738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z9ktw" event={"ID":"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27","Type":"ContainerStarted","Data":"44b61834eee1c536aa0f35eec95eea4815501cb97e71d1d71bf2626e5b553f43"}
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.618081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerStarted","Data":"3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821"}
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.619796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gssfn" event={"ID":"dc7ca155-a072-4915-b5c5-e0b36a29af9b","Type":"ContainerStarted","Data":"16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa"}
Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.716473 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c50ffbc_cc89_4adc_ae61_9100df4a3ba1.slice/crio-8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2 WatchSource:0}: Error finding container 8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2: Status 404 returned error can't find the container with id 8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.716498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"]
Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.803345 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"]
Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.830204 4869
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdcf5e33_de9f_408f_8200_6f42fe0d0771.slice/crio-7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930 WatchSource:0}: Error finding container 7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930: Status 404 returned error can't find the container with id 7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930 Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.912717 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.914685 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff7e998_18b9_4fbe_906a_d756f7cf16c6.slice/crio-3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253 WatchSource:0}: Error finding container 3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253: Status 404 returned error can't find the container with id 3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.253455 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.331717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.331866 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.331968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332058 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332182 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.334742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.340898 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.364238 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts" (OuterVolumeSpecName: "scripts") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.365178 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml" (OuterVolumeSpecName: "kube-api-access-h68ml") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "kube-api-access-h68ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.403154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438454 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438513 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438529 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438548 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438563 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.563299 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.632084 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data" (OuterVolumeSpecName: "config-data") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.646980 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" exitCode=0 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647066 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"2a3a8afa5f4f39b9c1443825049b785119a54a533b4cf3c5d4655fb9914dd6f0"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647155 4869 scope.go:117] "RemoveContainer" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647354 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647925 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.648853 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.658386 4869 generic.go:334] "Generic (PLEG): container finished" podID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerID="48561ec38ba8e1d863e22aea7226f624c163b5e704dc9c40612b25be2fba3af4" exitCode=0 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.658543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z9ktw" event={"ID":"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27","Type":"ContainerDied","Data":"48561ec38ba8e1d863e22aea7226f624c163b5e704dc9c40612b25be2fba3af4"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.669131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerStarted","Data":"94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.685509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerStarted","Data":"99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.685566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerStarted","Data":"7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.694549 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerStarted","Data":"d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.694630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerStarted","Data":"8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.700932 4869 generic.go:334] "Generic (PLEG): container finished" podID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerID="65c894d6caff283d8e12ca5ca2f52f63ea73a840cf785e78685f2636257f7088" exitCode=0 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.701046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gssfn" event={"ID":"dc7ca155-a072-4915-b5c5-e0b36a29af9b","Type":"ContainerDied","Data":"65c894d6caff283d8e12ca5ca2f52f63ea73a840cf785e78685f2636257f7088"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.709821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" 
event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerStarted","Data":"7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.709920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerStarted","Data":"3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.721071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-9kpbk" podStartSLOduration=2.721038806 podStartE2EDuration="2.721038806s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.711615223 +0000 UTC m=+1246.356251993" watchObservedRunningTime="2026-02-02 14:54:04.721038806 +0000 UTC m=+1246.365675576" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.741372 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" podStartSLOduration=2.741343808 podStartE2EDuration="2.741343808s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.737342568 +0000 UTC m=+1246.381979348" watchObservedRunningTime="2026-02-02 14:54:04.741343808 +0000 UTC m=+1246.385980578" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.764813 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-68d6-account-create-update-6m8ng" podStartSLOduration=2.7647886379999997 podStartE2EDuration="2.764788638s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.757970239 +0000 UTC m=+1246.402607009" watchObservedRunningTime="2026-02-02 14:54:04.764788638 +0000 UTC m=+1246.409425398" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.810372 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" podStartSLOduration=2.810347764 podStartE2EDuration="2.810347764s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.801502965 +0000 UTC m=+1246.446139735" watchObservedRunningTime="2026-02-02 14:54:04.810347764 +0000 UTC m=+1246.454984524" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.893168 4869 scope.go:117] "RemoveContainer" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.905224 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.928671 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.937228 4869 scope.go:117] "RemoveContainer" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964126 4869 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964819 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964836 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964856 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964867 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964890 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964931 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964943 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965173 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965201 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965216 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965229 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.970386 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.986742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.988370 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.988456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.000213 4869 scope.go:117] "RemoveContainer" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.054695 4869 scope.go:117] "RemoveContainer" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.055545 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f\": container with ID starting with 062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f not found: ID does not exist" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.055585 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f"} err="failed to get container status \"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f\": rpc error: code = NotFound desc = could not find container \"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f\": container with ID starting with 062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.055611 4869 scope.go:117] "RemoveContainer" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.056356 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876\": container with ID starting with 603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876 not found: ID does not exist" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.056431 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876"} err="failed to get container status \"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876\": rpc error: code = NotFound desc = could not find container \"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876\": container with ID starting with 603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876 not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.056470 4869 scope.go:117] "RemoveContainer" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.057377 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505\": container with ID starting with 5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505 not found: ID does not exist" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.057497 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505"} err="failed to get container status \"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505\": rpc error: code = NotFound desc = could not find container \"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505\": container with ID starting with 5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505 not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.057546 4869 scope.go:117] "RemoveContainer" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.059229 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689\": container with ID starting with 494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689 not found: ID does not exist" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.059585 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689"} err="failed to get container status \"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689\": rpc error: code = NotFound desc = could not find container \"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689\": container with ID starting with 494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689 not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.062014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.062038 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166537 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.167247 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.167713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.172299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.173389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.173589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.174121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.185781 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.307281 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.475149 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" path="/var/lib/kubelet/pods/aa9b6032-666f-44cb-849e-b82c50dc030a/volumes" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.725932 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerID="d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.726615 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerDied","Data":"d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.734358 4869 generic.go:334] "Generic (PLEG): container finished" podID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerID="7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.734443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerDied","Data":"7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.747528 4869 generic.go:334] "Generic (PLEG): container finished" podID="b1748ab6-c795-414c-a52b-7bf549358524" containerID="94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.747591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerDied","Data":"94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.752095 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerID="99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.752344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerDied","Data":"99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.813163 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.899209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:05 crc kubenswrapper[4869]: W0202 14:54:05.901113 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57ed2c6_7be3_4db2_919b_6cc161df175a.slice/crio-a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b WatchSource:0}: Error finding container a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b: Status 404 returned error can't find the container with id a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.292170 4869 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.302712 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.401538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.402876 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.402990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.403096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.404149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc7ca155-a072-4915-b5c5-e0b36a29af9b" (UID: "dc7ca155-a072-4915-b5c5-e0b36a29af9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.404684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" (UID: "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.409850 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp" (OuterVolumeSpecName: "kube-api-access-nrvbp") pod "dc7ca155-a072-4915-b5c5-e0b36a29af9b" (UID: "dc7ca155-a072-4915-b5c5-e0b36a29af9b"). InnerVolumeSpecName "kube-api-access-nrvbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.411925 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68" (OuterVolumeSpecName: "kube-api-access-h9p68") pod "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" (UID: "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27"). InnerVolumeSpecName "kube-api-access-h9p68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510570 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510612 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510624 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510633 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.773118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gssfn" event={"ID":"dc7ca155-a072-4915-b5c5-e0b36a29af9b","Type":"ContainerDied","Data":"16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa"} Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.773643 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.773170 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.776222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37"} Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.776283 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b"} Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.791240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z9ktw" event={"ID":"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27","Type":"ContainerDied","Data":"44b61834eee1c536aa0f35eec95eea4815501cb97e71d1d71bf2626e5b553f43"} Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.791333 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b61834eee1c536aa0f35eec95eea4815501cb97e71d1d71bf2626e5b553f43" Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.791499 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.165499 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.335538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.335593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.336763 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdcf5e33-de9f-408f-8200-6f42fe0d0771" (UID: "bdcf5e33-de9f-408f-8200-6f42fe0d0771"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.349389 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv" (OuterVolumeSpecName: "kube-api-access-rrrgv") pod "bdcf5e33-de9f-408f-8200-6f42fe0d0771" (UID: "bdcf5e33-de9f-408f-8200-6f42fe0d0771"). InnerVolumeSpecName "kube-api-access-rrrgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.386772 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.400100 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.412414 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.438807 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.438980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"b1748ab6-c795-414c-a52b-7bf549358524\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439012 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"b1748ab6-c795-414c-a52b-7bf549358524\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439585 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439599 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.440474 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1748ab6-c795-414c-a52b-7bf549358524" (UID: "b1748ab6-c795-414c-a52b-7bf549358524"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.440622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ff7e998-18b9-4fbe-906a-d756f7cf16c6" (UID: "0ff7e998-18b9-4fbe-906a-d756f7cf16c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.441234 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" (UID: "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.451115 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw" (OuterVolumeSpecName: "kube-api-access-7h8fw") pod "0ff7e998-18b9-4fbe-906a-d756f7cf16c6" (UID: "0ff7e998-18b9-4fbe-906a-d756f7cf16c6"). InnerVolumeSpecName "kube-api-access-7h8fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.451186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz" (OuterVolumeSpecName: "kube-api-access-k8trz") pod "b1748ab6-c795-414c-a52b-7bf549358524" (UID: "b1748ab6-c795-414c-a52b-7bf549358524"). InnerVolumeSpecName "kube-api-access-k8trz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.454588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm" (OuterVolumeSpecName: "kube-api-access-n66bm") pod "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" (UID: "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1"). InnerVolumeSpecName "kube-api-access-n66bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541158 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541209 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541227 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541239 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541251 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541266 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.810331 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerDied","Data":"3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821"} Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.810786 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.810389 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.813238 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.813233 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerDied","Data":"7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930"} Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.813307 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.816534 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.816572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerDied","Data":"8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2"} Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.816643 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.827130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerDied","Data":"3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253"} Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.827187 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253" Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.827277 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:08 crc kubenswrapper[4869]: I0202 14:54:08.841025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5"} Feb 02 14:54:09 crc kubenswrapper[4869]: I0202 14:54:09.852569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f"} Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.872742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e"} Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873085 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" containerID="cri-o://5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e" gracePeriod=30 Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873095 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" containerID="cri-o://2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f" gracePeriod=30 Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873097 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" containerID="cri-o://ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5" gracePeriod=30 Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873138 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" 
containerName="ceilometer-central-agent" containerID="cri-o://387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37" gracePeriod=30 Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873704 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.905199 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.375821266 podStartE2EDuration="7.905169238s" podCreationTimestamp="2026-02-02 14:54:04 +0000 UTC" firstStartedPulling="2026-02-02 14:54:05.907007163 +0000 UTC m=+1247.551643943" lastFinishedPulling="2026-02-02 14:54:11.436355155 +0000 UTC m=+1253.080991915" observedRunningTime="2026-02-02 14:54:11.900644646 +0000 UTC m=+1253.545281426" watchObservedRunningTime="2026-02-02 14:54:11.905169238 +0000 UTC m=+1253.549806008" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.819279 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"] Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820146 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1748ab6-c795-414c-a52b-7bf549358524" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820166 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1748ab6-c795-414c-a52b-7bf549358524" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820188 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820195 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820209 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820216 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820227 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820234 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820249 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820255 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820267 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820273 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" 
containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820429 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1748ab6-c795-414c-a52b-7bf549358524" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820444 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820464 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820475 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerName="mariadb-database-create" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820485 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820494 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerName="mariadb-account-create-update" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.821149 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.824085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.824085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wfkgs" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.824388 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.836492 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"] Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899135 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e" exitCode=0 Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899194 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f" exitCode=2 Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899204 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5" exitCode=0 Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e"} Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899300 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f"} Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5"} Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.067902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.067999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.068043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.068078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.075005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.076554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.077106 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.089441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.140621 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.652081 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"] Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.911306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerStarted","Data":"cd4c7fb90fab4fd4c0d2e3de0824c4a040e7e86423a38a960666cd32c520f1dd"} Feb 02 14:54:18 crc kubenswrapper[4869]: I0202 14:54:18.988387 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37" exitCode=0 Feb 02 14:54:18 crc kubenswrapper[4869]: I0202 14:54:18.988474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37"} Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.025646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b"} Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.026506 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.082947 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284966 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285475 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: 
\"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285520 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285756 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285956 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.286199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.289600 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts" (OuterVolumeSpecName: "scripts") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.289747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws" (OuterVolumeSpecName: "kube-api-access-98jws") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "kube-api-access-98jws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.314757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.368397 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.383744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data" (OuterVolumeSpecName: "config-data") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387329 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387390 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387407 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387418 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387431 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387443 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.055661 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.055992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerStarted","Data":"ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8"} Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.106489 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" podStartSLOduration=2.955524879 podStartE2EDuration="11.106463219s" podCreationTimestamp="2026-02-02 14:54:12 +0000 UTC" firstStartedPulling="2026-02-02 14:54:13.660794212 +0000 UTC m=+1255.305430982" lastFinishedPulling="2026-02-02 14:54:21.811732542 +0000 UTC m=+1263.456369322" observedRunningTime="2026-02-02 14:54:23.088031163 +0000 UTC m=+1264.732667933" watchObservedRunningTime="2026-02-02 14:54:23.106463219 +0000 UTC m=+1264.751099989" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.126949 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.139212 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.152178 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153231 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153367 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153459 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153543 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153629 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153695 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153797 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153863 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154217 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154316 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154413 4869 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154488 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.156568 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.161657 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.161979 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.163080 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.209405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.209795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.209966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210601 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210701 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"ceilometer-0\" (UID: 
\"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311756 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.312384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.313511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.319145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " 
pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.319291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.326754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.329865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.336635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.481142 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" path="/var/lib/kubelet/pods/d57ed2c6-7be3-4db2-919b-6cc161df175a/volumes" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.493963 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.010987 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.024939 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.073176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"dbafcb0e5e084df3fe80d818d3e6101e9afd6d736ce2a1f056810697e37884cd"} Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.445503 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:25 crc kubenswrapper[4869]: I0202 14:54:25.087292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5"} Feb 02 14:54:26 crc kubenswrapper[4869]: I0202 14:54:26.100393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588"} Feb 02 14:54:29 crc kubenswrapper[4869]: I0202 14:54:29.140278 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842"} Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b"} Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189878 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189582 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" containerID="cri-o://e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588" gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189421 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" containerID="cri-o://0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b" gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189363 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" containerID="cri-o://ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5" gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189515 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" containerID="cri-o://674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842" 
gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.228431 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.864205765 podStartE2EDuration="11.228403966s" podCreationTimestamp="2026-02-02 14:54:23 +0000 UTC" firstStartedPulling="2026-02-02 14:54:24.024523261 +0000 UTC m=+1265.669160031" lastFinishedPulling="2026-02-02 14:54:33.388721472 +0000 UTC m=+1275.033358232" observedRunningTime="2026-02-02 14:54:34.218835439 +0000 UTC m=+1275.863472219" watchObservedRunningTime="2026-02-02 14:54:34.228403966 +0000 UTC m=+1275.873040736" Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202890 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b" exitCode=0 Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202973 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842" exitCode=2 Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b"} Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.203038 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842"} Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.203054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5"} Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202985 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5" exitCode=0 Feb 02 14:54:36 crc kubenswrapper[4869]: I0202 14:54:36.219434 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588" exitCode=0 Feb 02 14:54:36 crc kubenswrapper[4869]: I0202 14:54:36.219512 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588"} Feb 02 14:54:36 crc kubenswrapper[4869]: I0202 14:54:36.959147 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116687 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116833 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.117165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.117385 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.117424 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.127279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts" (OuterVolumeSpecName: "scripts") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.129297 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2" (OuterVolumeSpecName: "kube-api-access-vc5r2") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "kube-api-access-vc5r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.152004 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.194683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219283 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219499 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219577 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219641 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219711 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219871 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data" (OuterVolumeSpecName: "config-data") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.233623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"dbafcb0e5e084df3fe80d818d3e6101e9afd6d736ce2a1f056810697e37884cd"} Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.234345 4869 scope.go:117] "RemoveContainer" containerID="0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.233998 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.287532 4869 scope.go:117] "RemoveContainer" containerID="674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.301724 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.313188 4869 scope.go:117] "RemoveContainer" containerID="e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.315943 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.326931 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347005 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347676 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347702 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347751 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347762 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347787 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347794 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347818 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347828 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348168 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348201 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348225 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348244 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" Feb 02 
14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.350971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.354749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.358217 4869 scope.go:117] "RemoveContainer" containerID="ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.358839 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.359865 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.481264 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f88376b-53a4-4124-abbe-510899dd905e" path="/var/lib/kubelet/pods/2f88376b-53a4-4124-abbe-510899dd905e/volumes" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.530673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.530758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.530783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531431 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsd22\" (UniqueName: 
\"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.634936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.636055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.643115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.643210 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.644876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.645124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.652862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.687483 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:38 crc kubenswrapper[4869]: I0202 14:54:38.150667 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:38 crc kubenswrapper[4869]: I0202 14:54:38.244718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"2ee7ad043782b76a75c638017ecf8eb737d1dae5d41ae89149f1f57042e858c0"} Feb 02 14:54:39 crc kubenswrapper[4869]: I0202 14:54:39.260205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294"} Feb 02 14:54:40 crc kubenswrapper[4869]: I0202 14:54:40.276119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20"} Feb 02 14:54:41 crc kubenswrapper[4869]: I0202 14:54:41.291505 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad"} Feb 02 14:54:45 crc kubenswrapper[4869]: I0202 14:54:45.332749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be"} Feb 02 14:54:45 crc kubenswrapper[4869]: I0202 14:54:45.365525 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.219052887 podStartE2EDuration="8.365483668s" podCreationTimestamp="2026-02-02 14:54:37 +0000 UTC" firstStartedPulling="2026-02-02 14:54:38.158405579 +0000 UTC m=+1279.803042349" lastFinishedPulling="2026-02-02 14:54:44.30483637 +0000 UTC m=+1285.949473130" observedRunningTime="2026-02-02 14:54:45.35706738 +0000 UTC m=+1287.001704170" watchObservedRunningTime="2026-02-02 14:54:45.365483668 +0000 UTC m=+1287.010120438" Feb 02 14:54:46 crc kubenswrapper[4869]: I0202 14:54:46.342621 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:54:59 crc kubenswrapper[4869]: I0202 14:54:59.474835 4869 generic.go:334] "Generic (PLEG): container finished" podID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerID="ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8" exitCode=0 Feb 02 14:54:59 crc kubenswrapper[4869]: I0202 14:54:59.474938 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerDied","Data":"ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8"} Feb 02 14:55:00 crc kubenswrapper[4869]: I0202 14:55:00.922049 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088278 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088667 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.096760 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts" (OuterVolumeSpecName: "scripts") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.097202 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6" (OuterVolumeSpecName: "kube-api-access-zhzn6") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "kube-api-access-zhzn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.119813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data" (OuterVolumeSpecName: "config-data") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.122406 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191542 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191606 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191622 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191634 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.501301 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.501328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerDied","Data":"cd4c7fb90fab4fd4c0d2e3de0824c4a040e7e86423a38a960666cd32c520f1dd"} Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.502084 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4c7fb90fab4fd4c0d2e3de0824c4a040e7e86423a38a960666cd32c520f1dd" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.607266 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 14:55:01 crc kubenswrapper[4869]: E0202 14:55:01.607731 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerName="nova-cell0-conductor-db-sync" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.607754 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerName="nova-cell0-conductor-db-sync" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.607973 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerName="nova-cell0-conductor-db-sync" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.608674 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.614015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wfkgs" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.614081 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.623005 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.703892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.704104 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fsx\" (UniqueName: \"kubernetes.io/projected/87abe16e-c4e3-4869-8f9e-6f9b46106c51-kube-api-access-s8fsx\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.704228 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.805876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-combined-ca-bundle\") pod 
\"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.806073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.806137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fsx\" (UniqueName: \"kubernetes.io/projected/87abe16e-c4e3-4869-8f9e-6f9b46106c51-kube-api-access-s8fsx\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.811008 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.811486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.827110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fsx\" (UniqueName: \"kubernetes.io/projected/87abe16e-c4e3-4869-8f9e-6f9b46106c51-kube-api-access-s8fsx\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.937438 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:02 crc kubenswrapper[4869]: I0202 14:55:02.418440 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 14:55:02 crc kubenswrapper[4869]: I0202 14:55:02.520353 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"87abe16e-c4e3-4869-8f9e-6f9b46106c51","Type":"ContainerStarted","Data":"510d3cd9cfeb8407252b63cdc3df3a7e1fe5b732180ef10f604fe381970cc172"} Feb 02 14:55:03 crc kubenswrapper[4869]: I0202 14:55:03.532079 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"87abe16e-c4e3-4869-8f9e-6f9b46106c51","Type":"ContainerStarted","Data":"582753a8e542fb7ee4048af3bb221d1c4681b0c6141b86732bb4af1a53b70250"} Feb 02 14:55:03 crc kubenswrapper[4869]: I0202 14:55:03.590089 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.590055423 podStartE2EDuration="2.590055423s" podCreationTimestamp="2026-02-02 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:03.577269177 +0000 UTC m=+1305.221905947" watchObservedRunningTime="2026-02-02 14:55:03.590055423 +0000 UTC m=+1305.234692193" Feb 02 14:55:04 crc kubenswrapper[4869]: I0202 14:55:04.541733 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:07 crc kubenswrapper[4869]: I0202 14:55:07.695039 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 14:55:10 crc kubenswrapper[4869]: I0202 14:55:10.674464 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:10 crc kubenswrapper[4869]: I0202 14:55:10.675034 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" containerID="cri-o://ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" gracePeriod=30 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.237816 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.423616 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"52d7887e-0487-4179-a0af-6f51b9eed8e7\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.431407 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j" (OuterVolumeSpecName: "kube-api-access-jsw9j") pod "52d7887e-0487-4179-a0af-6f51b9eed8e7" (UID: "52d7887e-0487-4179-a0af-6f51b9eed8e7"). InnerVolumeSpecName "kube-api-access-jsw9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.526464 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.612988 4869 generic.go:334] "Generic (PLEG): container finished" podID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" exitCode=2 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerDied","Data":"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3"} Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613088 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613113 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerDied","Data":"be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103"} Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613140 4869 scope.go:117] "RemoveContainer" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.641447 4869 scope.go:117] "RemoveContainer" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" Feb 02 14:55:11 crc kubenswrapper[4869]: E0202 14:55:11.642072 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3\": container with ID starting with ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3 not found: ID does not exist" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.642132 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3"} err="failed to get container status \"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3\": rpc error: code = NotFound desc = could not find container \"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3\": container with ID starting with ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3 not found: ID does not exist" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.642216 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.654934 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.677894 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: E0202 14:55:11.682966 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.682998 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.683195 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.683865 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.687640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.687851 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.697005 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833268 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833537 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7zx\" (UniqueName: \"kubernetes.io/projected/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-api-access-lr7zx\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.838734 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.839364 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" containerID="cri-o://94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294" gracePeriod=30 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.840053 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" containerID="cri-o://53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20" gracePeriod=30 Feb 02 14:55:11 crc 
kubenswrapper[4869]: I0202 14:55:11.840060 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" containerID="cri-o://247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be" gracePeriod=30 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.840783 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" containerID="cri-o://03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad" gracePeriod=30 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.938453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.939098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.939282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.939443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7zx\" (UniqueName: \"kubernetes.io/projected/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-api-access-lr7zx\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.945365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.953798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.954346 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.963851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7zx\" (UniqueName: 
\"kubernetes.io/projected/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-api-access-lr7zx\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.993056 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.005735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.541001 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:12 crc kubenswrapper[4869]: W0202 14:55:12.542982 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc78d1b99_1b30_416f_9afc_3dda8204e757.slice/crio-cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9 WatchSource:0}: Error finding container cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9: Status 404 returned error can't find the container with id cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9 Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.625206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c78d1b99-1b30-416f-9afc-3dda8204e757","Type":"ContainerStarted","Data":"cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9"} Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631605 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be" exitCode=0 Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631652 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad" exitCode=2 Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631665 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294" exitCode=0 Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631686 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be"} Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631709 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad"} Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294"} Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.702945 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"] Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.704487 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.709825 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.710441 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.713475 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"] Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.853778 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.855182 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868772 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.869133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.912589 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.949035 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.952857 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.966022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977764 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.978012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.978078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.001542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.006971 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.012221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.023075 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.056799 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.075818 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.078498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081697 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.082011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.082138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.089986 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.105101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.115506 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.152606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.159118 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.184856 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.184956 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 
14:55:13.185174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.191431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.205451 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.217142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.219601 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.226886 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.230201 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.231665 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.235210 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.289229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.292780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.293280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.293560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.299931 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.308782 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.311647 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.314336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.334541 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.337670 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.405221 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.407293 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.414674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.415321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.415711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.415889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.432122 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.495771 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" path="/var/lib/kubelet/pods/52d7887e-0487-4179-a0af-6f51b9eed8e7/volumes" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: 
I0202 14:55:13.517666 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517825 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.525416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.533728 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.534615 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.557775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") 
pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.608748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.619683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.619747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.619781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.620017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.620056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.620537 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.621298 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.622481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.622505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.625162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.662251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c78d1b99-1b30-416f-9afc-3dda8204e757","Type":"ContainerStarted","Data":"cdb90b94df8a6b5eaccd1c3364bfc4782ff72f3abb60923d8194df14a63b981d"} Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.664132 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.664800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.763685 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.854175 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.373956972 podStartE2EDuration="2.854151477s" podCreationTimestamp="2026-02-02 14:55:11 +0000 UTC" firstStartedPulling="2026-02-02 14:55:12.546262125 +0000 UTC m=+1314.190898895" lastFinishedPulling="2026-02-02 14:55:13.02645663 +0000 UTC m=+1314.671093400" observedRunningTime="2026-02-02 14:55:13.693328961 +0000 UTC m=+1315.337965731" watchObservedRunningTime="2026-02-02 14:55:13.854151477 +0000 UTC m=+1315.498788247" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.859292 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.016069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.019253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.025548 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.025670 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.042944 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.112818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 
14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.178158 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.281367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.285032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.293718 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.295291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.378751 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.436303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.556467 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.584039 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:14 crc kubenswrapper[4869]: W0202 14:55:14.678458 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddabd5514_892f_4f35_a9ca_2bf4cde0f5f5.slice/crio-db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7 WatchSource:0}: Error finding container db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7: Status 404 returned error can't find the container with id db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7 Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.717377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerStarted","Data":"8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0"} Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.722740 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerStarted","Data":"0f50f5a7419043a9c8e4096aa4798378e9fbf6f1d58cf6115d2fbee8f617e5fe"} Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.729754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerStarted","Data":"5be81fda9a826f7e54ad4ca6e6d929236a63542303c28bf9d0e22fa1ebc93458"} Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.733764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerStarted","Data":"51ac651ddd93f893e6d3273b647d0ad831e6db906a9c89298fdc003ced36fdc1"} Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.743474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerStarted","Data":"9e1c8170bbe27458021229751e306804c8d9eb43efb07049fd479764776f395c"} Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.230251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"] Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.304292 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.304367 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:55:15 
crc kubenswrapper[4869]: I0202 14:55:15.759983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerStarted","Data":"b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69"} Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.760475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerStarted","Data":"f3ee909b4bcfcda6fe199a0eb7bb5f83a5693cde99ca407a1e05e7fdc864bdd9"} Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.772024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerStarted","Data":"38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17"} Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.775101 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerStarted","Data":"db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7"} Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.778540 4869 generic.go:334] "Generic (PLEG): container finished" podID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" exitCode=0 Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.779590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerDied","Data":"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c"} Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.785716 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bfr68" podStartSLOduration=2.785672281 podStartE2EDuration="2.785672281s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:15.780343879 +0000 UTC m=+1317.424980669" watchObservedRunningTime="2026-02-02 14:55:15.785672281 +0000 UTC m=+1317.430309051" Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.809206 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2bx2t" podStartSLOduration=3.809172062 podStartE2EDuration="3.809172062s" podCreationTimestamp="2026-02-02 14:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:15.797035212 +0000 UTC m=+1317.441671982" watchObservedRunningTime="2026-02-02 14:55:15.809172062 +0000 UTC m=+1317.453808832" Feb 02 14:55:16 crc kubenswrapper[4869]: I0202 14:55:16.716399 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:16 crc kubenswrapper[4869]: I0202 14:55:16.732922 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:17 crc kubenswrapper[4869]: I0202 14:55:17.817246 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20" exitCode=0 Feb 02 14:55:17 crc 
kubenswrapper[4869]: I0202 14:55:17.817693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20"} Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.764105 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.832076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"2ee7ad043782b76a75c638017ecf8eb737d1dae5d41ae89149f1f57042e858c0"} Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.832154 4869 scope.go:117] "RemoveContainer" containerID="247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.832429 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.836068 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.836104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " 
Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.837512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.838734 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.845280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22" (OuterVolumeSpecName: "kube-api-access-vsd22") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "kube-api-access-vsd22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.845343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts" (OuterVolumeSpecName: "scripts") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.926730 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941830 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941879 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941891 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941923 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941932 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.981866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.016657 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data" (OuterVolumeSpecName: "config-data") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.044031 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.044076 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.112962 4869 scope.go:117] "RemoveContainer" containerID="03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.140319 4869 scope.go:117] "RemoveContainer" containerID="53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.188261 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.219250 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.231154 4869 scope.go:117] "RemoveContainer" containerID="94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.244964 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245668 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245693 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245712 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245721 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245753 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245775 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245974 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245990 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.246009 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.246032 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.252686 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.257830 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.258028 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.258084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.262096 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.352194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.353692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.353862 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 
14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.456657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.457514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.457769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458675 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"ceilometer-0\" (UID: 
\"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.460269 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.460786 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.462135 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.462358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.466453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.478429 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.479712 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.486176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.496318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.498877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.511137 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.517545 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" 
path="/var/lib/kubelet/pods/4e20726c-76b7-41eb-a27b-3deb88fcc6f9/volumes" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.585614 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.876885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerStarted","Data":"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.877895 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.889720 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerStarted","Data":"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.889974 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" gracePeriod=30 Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.908729 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" podStartSLOduration=6.908702478 podStartE2EDuration="6.908702478s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:19.903895668 +0000 UTC m=+1321.548532438" watchObservedRunningTime="2026-02-02 14:55:19.908702478 +0000 UTC m=+1321.553339248" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.909527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerStarted","Data":"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.909572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerStarted","Data":"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.925114 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" containerID="cri-o://be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" gracePeriod=30 Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.925517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerStarted","Data":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.925557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerStarted","Data":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} Feb 02 14:55:19 
crc kubenswrapper[4869]: I0202 14:55:19.925628 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" containerID="cri-o://c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" gracePeriod=30 Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.946238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerStarted","Data":"c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.953697 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.978755587 podStartE2EDuration="7.953662659s" podCreationTimestamp="2026-02-02 14:55:12 +0000 UTC" firstStartedPulling="2026-02-02 14:55:14.091416845 +0000 UTC m=+1315.736053615" lastFinishedPulling="2026-02-02 14:55:19.066323917 +0000 UTC m=+1320.710960687" observedRunningTime="2026-02-02 14:55:19.924347274 +0000 UTC m=+1321.568984034" watchObservedRunningTime="2026-02-02 14:55:19.953662659 +0000 UTC m=+1321.598299439" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.987642 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.600860331 podStartE2EDuration="6.987621439s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="2026-02-02 14:55:14.688365906 +0000 UTC m=+1316.333002676" lastFinishedPulling="2026-02-02 14:55:19.075127004 +0000 UTC m=+1320.719763784" observedRunningTime="2026-02-02 14:55:19.95043186 +0000 UTC m=+1321.595068630" watchObservedRunningTime="2026-02-02 14:55:19.987621439 +0000 UTC m=+1321.632258209" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.008774 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.252845254 podStartE2EDuration="8.008742061s" podCreationTimestamp="2026-02-02 14:55:12 +0000 UTC" firstStartedPulling="2026-02-02 14:55:13.878726565 +0000 UTC m=+1315.523363335" lastFinishedPulling="2026-02-02 14:55:18.634623382 +0000 UTC m=+1320.279260142" observedRunningTime="2026-02-02 14:55:19.981839076 +0000 UTC m=+1321.626475856" watchObservedRunningTime="2026-02-02 14:55:20.008742061 +0000 UTC m=+1321.653378831" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.025071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.006903052 podStartE2EDuration="7.025042525s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="2026-02-02 14:55:14.619569085 +0000 UTC m=+1316.264205865" lastFinishedPulling="2026-02-02 14:55:18.637708568 +0000 UTC m=+1320.282345338" observedRunningTime="2026-02-02 14:55:20.009256935 +0000 UTC m=+1321.653893705" watchObservedRunningTime="2026-02-02 14:55:20.025042525 +0000 UTC m=+1321.669679295" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.136661 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.950290 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.958562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"08a2d8ed761534c05fe2670f151170765676bc37409dea3bba0f77b45f9d496c"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961478 4869 generic.go:334] "Generic (PLEG): container finished" podID="57e664d1-4870-4eb5-8556-4418e41299eb" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" exitCode=0 Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961533 4869 generic.go:334] "Generic (PLEG): container finished" podID="57e664d1-4870-4eb5-8556-4418e41299eb" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" exitCode=143 Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerDied","Data":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961585 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerDied","Data":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerDied","Data":"5be81fda9a826f7e54ad4ca6e6d929236a63542303c28bf9d0e22fa1ebc93458"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961650 4869 scope.go:117] "RemoveContainer" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.962025 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.005151 4869 scope.go:117] "RemoveContainer" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.017982 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018158 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs" (OuterVolumeSpecName: "logs") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.020560 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.042951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds" (OuterVolumeSpecName: "kube-api-access-8n8ds") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "kube-api-access-8n8ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.048560 4869 scope.go:117] "RemoveContainer" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.049686 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": container with ID starting with c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8 not found: ID does not exist" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.049729 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} err="failed to get container status \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": rpc error: code = NotFound desc = could not find container \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": container with ID starting with c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.049759 4869 scope.go:117] "RemoveContainer" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.050214 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": container with ID starting with be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527 not found: ID does not exist" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050241 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} err="failed to get container status \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": rpc error: code = NotFound desc = could not find container \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": container with ID starting with be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050257 4869 scope.go:117] "RemoveContainer" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050459 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} err="failed to get container status \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": rpc error: code = NotFound desc = could not find container \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": container with ID starting with c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050484 4869 scope.go:117] "RemoveContainer" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.052560 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} err="failed to get container status \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": rpc error: code = NotFound desc = could not find container \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": container with ID starting with be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.069279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.095575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data" (OuterVolumeSpecName: "config-data") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.122955 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.123000 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.123017 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.315818 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.384186 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.435719 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.436459 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.436535 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436542 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436791 4869 
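Note: the "ContainerStatus from runtime service failed" / "DeleteContainer returned error" churn above is benign. The kubelet re-issues removal for container IDs the runtime has already deleted, and CRI-O answers with gRPC NotFound. A minimal sketch of treating that NotFound as "already gone", assuming google.golang.org/grpc; criRemove is a hypothetical stand-in, not the kubelet's actual client API:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats a gRPC NotFound from the runtime service as
// success: the container is already gone, so a repeated RemoveContainer
// is a no-op rather than a real failure.
func removeIfPresent(criRemove func(id string) error, id string) error {
	if err := criRemove(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove container %s: %w", id, err)
	}
	return nil
}

func main() {
	// Simulate the runtime answering NotFound for an already-deleted ID.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeIfPresent(gone, "c726b0ee4d57")) // <nil>
}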
memory_manager.go:354] "RemoveStaleState removing state" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436835 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.438576 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.441673 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.443721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.452752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.491048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" path="/var/lib/kubelet/pods/57e664d1-4870-4eb5-8556-4418e41299eb/volumes" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.534545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.534675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.534823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.535088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.535179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.637057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " 
pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.638660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.638796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.638998 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.639072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.639694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.643425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.643755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.644401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.659297 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.772856 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.997459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"} Feb 02 14:55:22 crc kubenswrapper[4869]: I0202 14:55:22.040901 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 14:55:22 crc kubenswrapper[4869]: I0202 14:55:22.383120 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.020920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerStarted","Data":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.021445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerStarted","Data":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.021463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerStarted","Data":"22d67239dd7b49d55db153438c6a489811a47575626ce29e18944434f226cb57"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.029158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.059109 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.059083162 podStartE2EDuration="2.059083162s" podCreationTimestamp="2026-02-02 14:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:23.048477169 +0000 UTC m=+1324.693113939" watchObservedRunningTime="2026-02-02 14:55:23.059083162 +0000 UTC m=+1324.703719932" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.220376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.232608 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.232666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.263935 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.609724 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.609772 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.050127 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"} Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.085422 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.651294 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.651308 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.070768 4869 generic.go:334] "Generic (PLEG): container finished" podID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerID="38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17" exitCode=0 Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.070872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerDied","Data":"38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17"} Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.773778 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.774474 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.088455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"} Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.119852 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.693992325 podStartE2EDuration="8.119827487s" podCreationTimestamp="2026-02-02 14:55:19 +0000 UTC" firstStartedPulling="2026-02-02 14:55:20.147570754 +0000 UTC m=+1321.792207524" lastFinishedPulling="2026-02-02 14:55:26.573405916 +0000 UTC m=+1328.218042686" observedRunningTime="2026-02-02 14:55:27.110515387 +0000 UTC m=+1328.755152167" watchObservedRunningTime="2026-02-02 14:55:27.119827487 +0000 UTC m=+1328.764464257" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.581431 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.688619 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.689340 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.689541 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.689657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.694584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m" (OuterVolumeSpecName: "kube-api-access-7xx4m") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "kube-api-access-7xx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.705949 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts" (OuterVolumeSpecName: "scripts") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.720063 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data" (OuterVolumeSpecName: "config-data") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.726351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.791800 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.792359 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.792373 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.792382 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103147 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerDied","Data":"8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0"} Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103387 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.104086 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.318744 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.319564 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" containerID="cri-o://a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.319531 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" containerID="cri-o://ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.336488 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.336921 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" containerID="cri-o://c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.356528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103147 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerDied","Data":"8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0"}
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103387 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0"
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.104086 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.318744 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.319564 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" containerID="cri-o://a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" gracePeriod=30
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.319531 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" containerID="cri-o://ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" gracePeriod=30
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.336488 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.336921 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" containerID="cri-o://c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" gracePeriod=30
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.356528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.357095 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" containerID="cri-o://0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" gracePeriod=30
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.357317 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" containerID="cri-o://6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" gracePeriod=30
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.769437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.854658 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"]
Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.855039 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" containerID="cri-o://c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6" gracePeriod=10
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.059231 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.122456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") "
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123135 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") "
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") "
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") "
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") "
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs" (OuterVolumeSpecName: "logs") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.124551 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") on node \"crc\" DevicePath \"\""
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.133534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k" (OuterVolumeSpecName: "kube-api-access-j4q5k") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "kube-api-access-j4q5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.136686 4869 generic.go:334] "Generic (PLEG): container finished" podID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerID="b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69" exitCode=0
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.136763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerDied","Data":"b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69"}
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141359 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" exitCode=0
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141392 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" exitCode=143
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerDied","Data":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"}
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerDied","Data":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"}
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141498 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerDied","Data":"22d67239dd7b49d55db153438c6a489811a47575626ce29e18944434f226cb57"}
Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141515 4869 scope.go:117] "RemoveContainer" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.154660 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerID="c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6" exitCode=0 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.154769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerDied","Data":"c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.171204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data" (OuterVolumeSpecName: "config-data") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.174867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.182049 4869 generic.go:334] "Generic (PLEG): container finished" podID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" exitCode=143 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.183480 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerDied","Data":"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.213823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227560 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227608 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227621 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227630 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.315384 4869 scope.go:117] "RemoveContainer" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.356811 4869 scope.go:117] "RemoveContainer" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.357606 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": container with ID starting with 6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec not found: ID does not exist" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.357643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} err="failed to get container status \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": rpc error: code = NotFound desc = could not find container \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": container with ID starting with 6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.357670 4869 scope.go:117] "RemoveContainer" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.358238 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": container with ID starting with 0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731 not found: ID does not exist" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358259 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} err="failed to get container status \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": rpc 
error: code = NotFound desc = could not find container \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": container with ID starting with 0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731 not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358273 4869 scope.go:117] "RemoveContainer" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358626 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} err="failed to get container status \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": rpc error: code = NotFound desc = could not find container \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": container with ID starting with 6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358649 4869 scope.go:117] "RemoveContainer" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.359011 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} err="failed to get container status \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": rpc error: code = NotFound desc = could not find container \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": container with ID starting with 0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731 not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.431812 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.523976 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.534617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.534855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.534897 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.535096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.535142 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.573770 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.590348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv" (OuterVolumeSpecName: "kube-api-access-q4pvv") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "kube-api-access-q4pvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.607669 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617152 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="init" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617199 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="init" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617227 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617237 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617287 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617301 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerName="nova-manage" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617308 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerName="nova-manage" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617340 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617347 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617987 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.618021 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerName="nova-manage" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.618038 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.620253 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.624254 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.628070 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.629762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.631432 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.638984 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.642258 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.666309 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.674456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.674628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.674901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675115 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675843 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675882 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675909 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675944 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.678191 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config" (OuterVolumeSpecName: "config") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778644 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778732 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.779232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.784113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.785511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.799383 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 
crc kubenswrapper[4869]: I0202 14:55:29.802302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.953531 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.201114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerDied","Data":"3aa5c96598f9d84b8ea60ab2f8542911baacbe20302c3b591676275481c40de5"} Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.201200 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.201582 4869 scope.go:117] "RemoveContainer" containerID="c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.235413 4869 scope.go:117] "RemoveContainer" containerID="e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.263767 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.273148 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.485116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.620726 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.717886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.717997 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.718057 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.718216 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.726138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx" (OuterVolumeSpecName: "kube-api-access-z8ndx") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "kube-api-access-z8ndx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.726817 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts" (OuterVolumeSpecName: "scripts") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.748254 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.758383 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data" (OuterVolumeSpecName: "config-data") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820526 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820607 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820624 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820638 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.215541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerStarted","Data":"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.216092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerStarted","Data":"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.216110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerStarted","Data":"f2995f40ac54472f74017bd157579158e7b1849e936f0eca8f4970077675a29d"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.218906 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.219118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerDied","Data":"f3ee909b4bcfcda6fe199a0eb7bb5f83a5693cde99ca407a1e05e7fdc864bdd9"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.219165 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ee909b4bcfcda6fe199a0eb7bb5f83a5693cde99ca407a1e05e7fdc864bdd9" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.249120 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.249087997 podStartE2EDuration="2.249087997s" podCreationTimestamp="2026-02-02 14:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:31.241172962 +0000 UTC m=+1332.885809732" watchObservedRunningTime="2026-02-02 14:55:31.249087997 +0000 UTC m=+1332.893724767" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.277801 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 14:55:31 crc kubenswrapper[4869]: E0202 14:55:31.278427 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerName="nova-cell1-conductor-db-sync" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.278449 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerName="nova-cell1-conductor-db-sync" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.278702 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerName="nova-cell1-conductor-db-sync" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.279454 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.282287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.300859 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.330415 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.330519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.330674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ggg\" (UniqueName: \"kubernetes.io/projected/7ed5d945-0024-455d-a2d4-c8724693b402-kube-api-access-82ggg\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.432501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ggg\" (UniqueName: \"kubernetes.io/projected/7ed5d945-0024-455d-a2d4-c8724693b402-kube-api-access-82ggg\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.432937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.433071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.439135 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.439300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.457562 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ggg\" (UniqueName: \"kubernetes.io/projected/7ed5d945-0024-455d-a2d4-c8724693b402-kube-api-access-82ggg\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.481345 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" path="/var/lib/kubelet/pods/3c0c79bc-79ef-4876-b621-25ff976ecad2/volumes" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.483367 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" path="/var/lib/kubelet/pods/7e32e648-8194-4d43-8d61-820b72b8d1b4/volumes" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.604700 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.191591 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.211214 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.244226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ed5d945-0024-455d-a2d4-c8724693b402","Type":"ContainerStarted","Data":"d2d62b29a7011784afde2cc529b97e434fdf493a41bc3707e0e5c6d3927f9b46"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247603 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247719 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.248041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs" (OuterVolumeSpecName: "logs") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.248251 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.253303 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9" (OuterVolumeSpecName: "kube-api-access-gfqb9") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "kube-api-access-gfqb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.254184 4869 generic.go:334] "Generic (PLEG): container finished" podID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerID="c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" exitCode=0 Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.254269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerDied","Data":"c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.257697 4869 generic.go:334] "Generic (PLEG): container finished" podID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" exitCode=0 Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258182 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerDied","Data":"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258311 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerDied","Data":"db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258368 4869 scope.go:117] "RemoveContainer" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.298063 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data" (OuterVolumeSpecName: "config-data") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.300092 4869 scope.go:117] "RemoveContainer" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.311783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.342774 4869 scope.go:117] "RemoveContainer" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.344616 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65\": container with ID starting with a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65 not found: ID does not exist" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.344654 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65"} err="failed to get container status \"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65\": rpc error: code = NotFound desc = could not find container \"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65\": container with ID starting with a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65 not found: ID does not exist" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.344683 4869 scope.go:117] "RemoveContainer" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.345167 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec\": container with ID starting with ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec not found: ID does not exist" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.345233 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec"} err="failed to get container status \"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec\": rpc error: code = NotFound desc = could not find container \"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec\": container with ID starting with ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec not found: ID does not exist" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.351608 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.351643 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.351657 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.377904 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.452859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.453371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.453607 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.460960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n" (OuterVolumeSpecName: "kube-api-access-6ph7n") pod "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" (UID: "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2"). InnerVolumeSpecName "kube-api-access-6ph7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.490133 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data" (OuterVolumeSpecName: "config-data") pod "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" (UID: "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.494386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" (UID: "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.560949 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.561657 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.561739 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.609808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.622802 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.645715 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.646523 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646547 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.646586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.646611 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646618 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646899 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646933 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646946 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.649706 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.652504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.659142 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663553 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663586 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663717 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.767414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.767839 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.767954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.768137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.769805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " 
pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.773836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.774715 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.795629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.969334 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.283241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ed5d945-0024-455d-a2d4-c8724693b402","Type":"ContainerStarted","Data":"4dfa4e7c32f6380a95107b356bceeaebce3c44c96e6ee5973777cd176b675abb"} Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.283731 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.287234 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerDied","Data":"51ac651ddd93f893e6d3273b647d0ad831e6db906a9c89298fdc003ced36fdc1"} Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.287306 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.287331 4869 scope.go:117] "RemoveContainer" containerID="c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.306834 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.306808202 podStartE2EDuration="2.306808202s" podCreationTimestamp="2026-02-02 14:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:33.30352521 +0000 UTC m=+1334.948161990" watchObservedRunningTime="2026-02-02 14:55:33.306808202 +0000 UTC m=+1334.951444972" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.352464 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.363722 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.377971 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.379483 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.388830 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.399169 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.461031 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.471851 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" path="/var/lib/kubelet/pods/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2/volumes" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.472780 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" path="/var/lib/kubelet/pods/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5/volumes" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.488361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.488827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.489018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.591022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.591103 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.591249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.600304 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod 
\"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.600662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.611273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.714514 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.218633 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.302074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerStarted","Data":"ad2b09060cc90b2b66052da409b095c5c7bf4ff33b856487d4aab5822df918b3"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.308864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerStarted","Data":"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.308972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerStarted","Data":"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.308994 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerStarted","Data":"992e8673264eb1425686bfadfad4e661653112c95495432e701a166b56edfaa7"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.347492 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.347462915 podStartE2EDuration="2.347462915s" podCreationTimestamp="2026-02-02 14:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:34.334380102 +0000 UTC m=+1335.979016892" watchObservedRunningTime="2026-02-02 14:55:34.347462915 +0000 UTC m=+1335.992099685" Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.966998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.967419 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:35 crc kubenswrapper[4869]: I0202 14:55:35.329983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerStarted","Data":"38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88"} Feb 02 14:55:35 
crc kubenswrapper[4869]: I0202 14:55:35.349193 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.349121575 podStartE2EDuration="2.349121575s" podCreationTimestamp="2026-02-02 14:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:35.345772012 +0000 UTC m=+1336.990408782" watchObservedRunningTime="2026-02-02 14:55:35.349121575 +0000 UTC m=+1336.993758345" Feb 02 14:55:38 crc kubenswrapper[4869]: I0202 14:55:38.716010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 14:55:39 crc kubenswrapper[4869]: I0202 14:55:39.953931 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 14:55:39 crc kubenswrapper[4869]: I0202 14:55:39.954010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 14:55:40 crc kubenswrapper[4869]: I0202 14:55:40.974262 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:40 crc kubenswrapper[4869]: I0202 14:55:40.977657 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:41 crc kubenswrapper[4869]: I0202 14:55:41.634391 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:42 crc kubenswrapper[4869]: I0202 14:55:42.970747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:42 crc kubenswrapper[4869]: I0202 14:55:42.970865 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:43 crc kubenswrapper[4869]: I0202 14:55:43.716138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 14:55:43 crc kubenswrapper[4869]: I0202 14:55:43.753685 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 14:55:44 crc kubenswrapper[4869]: I0202 14:55:44.073345 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:44 crc kubenswrapper[4869]: I0202 14:55:44.073345 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:44 crc kubenswrapper[4869]: I0202 14:55:44.452480 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 
14:55:45 crc kubenswrapper[4869]: I0202 14:55:45.304289 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:55:45 crc kubenswrapper[4869]: I0202 14:55:45.304364 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.597400 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.966752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.967619 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.975247 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.976041 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.392864 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.486875 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1a29990-0400-4b85-86fe-2a00b5809576" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" exitCode=137 Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.486939 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.486947 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerDied","Data":"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838"} Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.488690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerDied","Data":"0f50f5a7419043a9c8e4096aa4798378e9fbf6f1d58cf6115d2fbee8f617e5fe"} Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.488737 4869 scope.go:117] "RemoveContainer" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.512624 4869 scope.go:117] "RemoveContainer" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" Feb 02 14:55:50 crc kubenswrapper[4869]: E0202 14:55:50.514809 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838\": container with ID starting with 8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838 not found: ID does not exist" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.514882 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838"} err="failed to get container status \"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838\": rpc error: code = NotFound desc = could not find container \"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838\": container with ID starting with 8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838 not found: ID does not exist" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.522601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"d1a29990-0400-4b85-86fe-2a00b5809576\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.522657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"d1a29990-0400-4b85-86fe-2a00b5809576\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.522903 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"d1a29990-0400-4b85-86fe-2a00b5809576\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.531687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52" (OuterVolumeSpecName: "kube-api-access-h4f52") pod "d1a29990-0400-4b85-86fe-2a00b5809576" (UID: "d1a29990-0400-4b85-86fe-2a00b5809576"). 
InnerVolumeSpecName "kube-api-access-h4f52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.557155 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1a29990-0400-4b85-86fe-2a00b5809576" (UID: "d1a29990-0400-4b85-86fe-2a00b5809576"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.565054 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data" (OuterVolumeSpecName: "config-data") pod "d1a29990-0400-4b85-86fe-2a00b5809576" (UID: "d1a29990-0400-4b85-86fe-2a00b5809576"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.626193 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.626623 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.626639 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.823372 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.834090 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.905025 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:50 crc kubenswrapper[4869]: E0202 14:55:50.906109 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.906128 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.906481 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.907523 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.918246 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.918533 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.919949 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.938205 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037091 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmkm\" (UniqueName: \"kubernetes.io/projected/127a427f-66a5-4d07-ac48-aea0da95d425-kube-api-access-pdmkm\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037141 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037245 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037328 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.139938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.139985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmkm\" (UniqueName: \"kubernetes.io/projected/127a427f-66a5-4d07-ac48-aea0da95d425-kube-api-access-pdmkm\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.140023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.140054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.140112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.146629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.146831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.151602 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.151631 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.172116 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmkm\" (UniqueName: \"kubernetes.io/projected/127a427f-66a5-4d07-ac48-aea0da95d425-kube-api-access-pdmkm\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.245441 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.474794 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" path="/var/lib/kubelet/pods/d1a29990-0400-4b85-86fe-2a00b5809576/volumes"
Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.716890 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 14:55:51 crc kubenswrapper[4869]: W0202 14:55:51.720114 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127a427f_66a5_4d07_ac48_aea0da95d425.slice/crio-3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67 WatchSource:0}: Error finding container 3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67: Status 404 returned error can't find the container with id 3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.513791 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"127a427f-66a5-4d07-ac48-aea0da95d425","Type":"ContainerStarted","Data":"57f86155facf843e6551718f2f10381aae1b22f7d747e0f4415087f5a3853807"}
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.514694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"127a427f-66a5-4d07-ac48-aea0da95d425","Type":"ContainerStarted","Data":"3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67"}
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.548493 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.548460409 podStartE2EDuration="2.548460409s" podCreationTimestamp="2026-02-02 14:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:52.539447946 +0000 UTC m=+1354.184084736" watchObservedRunningTime="2026-02-02 14:55:52.548460409 +0000 UTC m=+1354.193097179"
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.975072 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.977593 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.978924 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.983085 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.525881 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.530338 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.722772 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"]
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.724848 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.746998 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"]
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804448 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804596 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910447 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.912889 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.913984 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.914224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.915007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.938230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:54 crc kubenswrapper[4869]: I0202 14:55:54.060677 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:54 crc kubenswrapper[4869]: I0202 14:55:54.814322 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"]
Feb 02 14:55:55 crc kubenswrapper[4869]: I0202 14:55:55.570716 4869 generic.go:334] "Generic (PLEG): container finished" podID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerID="8cf856a4df374f3980cbc2ddc8eb1618f3c5e7b2fc6a969f06245cd19d267eb6" exitCode=0
Feb 02 14:55:55 crc kubenswrapper[4869]: I0202 14:55:55.570806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerDied","Data":"8cf856a4df374f3980cbc2ddc8eb1618f3c5e7b2fc6a969f06245cd19d267eb6"}
Feb 02 14:55:55 crc kubenswrapper[4869]: I0202 14:55:55.571479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerStarted","Data":"b0a192cf90b2c34b440565bf71d8167abd947c406c2ba5f06b41ea7ba562f653"}
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.245857 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392386 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392701 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent" containerID="cri-o://e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" gracePeriod=30
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392804 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent" containerID="cri-o://daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" gracePeriod=30
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392824 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core" containerID="cri-o://a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" gracePeriod=30
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.393385 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd" containerID="cri-o://33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" gracePeriod=30
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.587144 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerStarted","Data":"498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7"}
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.588276 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt"
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596454 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" exitCode=0
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596496 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" exitCode=2
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"}
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"}
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.630106 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" podStartSLOduration=3.630072902 podStartE2EDuration="3.630072902s" podCreationTimestamp="2026-02-02 14:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:56.617700835 +0000 UTC m=+1358.262337645" watchObservedRunningTime="2026-02-02 14:55:56.630072902 +0000 UTC m=+1358.274709672"
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.640966 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.641216 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" containerID="cri-o://5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" gracePeriod=30
Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.641378 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" containerID="cri-o://5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" gracePeriod=30
Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.609210 4869 generic.go:334] "Generic (PLEG): container finished" podID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" exitCode=143
Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.609297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerDied","Data":"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"}
Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.612192 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" exitCode=0
Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.612258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"}
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.258194 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.269753 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376417 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376530 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376594 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376773 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376841 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376990 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.377020 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.377217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") "
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.377862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs" (OuterVolumeSpecName: "logs") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.378165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.379014 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.379042 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.379053 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.385300 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv" (OuterVolumeSpecName: "kube-api-access-f2lcv") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "kube-api-access-f2lcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.387100 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg" (OuterVolumeSpecName: "kube-api-access-ts7bg") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "kube-api-access-ts7bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.387293 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts" (OuterVolumeSpecName: "scripts") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.422284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data" (OuterVolumeSpecName: "config-data") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.455307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.463131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.466953 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481695 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481748 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481759 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481773 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481784 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481794 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481805 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.484148 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.540476 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data" (OuterVolumeSpecName: "config-data") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.583967 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.584436 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643694 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" exitCode=0
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"}
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"08a2d8ed761534c05fe2670f151170765676bc37409dea3bba0f77b45f9d496c"}
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643845 4869 scope.go:117] "RemoveContainer" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643844 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647216 4869 generic.go:334] "Generic (PLEG): container finished" podID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" exitCode=0
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerDied","Data":"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"}
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerDied","Data":"992e8673264eb1425686bfadfad4e661653112c95495432e701a166b56edfaa7"}
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647393 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.671213 4869 scope.go:117] "RemoveContainer" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.705997 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.709339 4869 scope.go:117] "RemoveContainer" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.734615 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.741053 4869 scope.go:117] "RemoveContainer" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.788791 4869 scope.go:117] "RemoveContainer" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.789242 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.790169 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2\": container with ID starting with 33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2 not found: ID does not exist" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.790224 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"} err="failed to get container status \"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2\": rpc error: code = NotFound desc = could not find container \"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2\": container with ID starting with 33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2 not found: ID does not exist"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.790262 4869 scope.go:117] "RemoveContainer" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.790841 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02\": container with ID starting with a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02 not found: ID does not exist" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.791263 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"} err="failed to get container status \"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02\": rpc error: code = NotFound desc = could not find container \"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02\": container with ID starting with a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02 not found: ID does not exist"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.791524 4869 scope.go:117] "RemoveContainer" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.791962 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79\": container with ID starting with daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79 not found: ID does not exist" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.792100 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"} err="failed to get container status \"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79\": rpc error: code = NotFound desc = could not find container \"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79\": container with ID starting with daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79 not found: ID does not exist"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.792206 4869 scope.go:117] "RemoveContainer" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.793302 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3\": container with ID starting with e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3 not found: ID does not exist" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.793340 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"} err="failed to get container status \"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3\": rpc error: code = NotFound desc = could not find container \"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3\": container with ID starting with e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3 not found: ID does not exist"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.793363 4869 scope.go:117] "RemoveContainer" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.809207 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.816565 4869 scope.go:117] "RemoveContainer" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823078 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823496 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823517 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823530 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823536 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823552 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823569 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823577 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823593 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823607 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823613 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823829 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823848 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823856 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823863 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823870 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823879 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.825611 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.831864 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.832217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.832960 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.860543 4869 scope.go:117] "RemoveContainer" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.861048 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.862558 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc\": container with ID starting with 5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc not found: ID does not exist" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.862594 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"} err="failed to get container status \"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc\": rpc error: code = NotFound desc = could not find container \"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc\": container with ID starting with 5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc not found: ID does not exist"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.862623 4869 scope.go:117] "RemoveContainer" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"
Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.863578 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194\": container with ID starting with 5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194 not found: ID does not exist" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.863605 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"} err="failed to get container status \"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194\": rpc error: code = NotFound desc = could not find container \"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194\": container with ID starting with 5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194 not found: ID does not exist"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.871856 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.873931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.877515 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.877953 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.878130 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.890572 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904505 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905567 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905871 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008668 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008695 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008716 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008739 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008776 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.010260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.015110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.015169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.017164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.017176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.018158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.028022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.113302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.117277 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.117550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.121669 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.124365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.135832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.148579 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.251006 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.251123 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.295431 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.474601 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" path="/var/lib/kubelet/pods/4b807d4b-0c84-4300-bdc8-997bd3fc4293/volumes"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.475985 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" path="/var/lib/kubelet/pods/8f07b304-b006-4eff-abbe-632939ffb20c/volumes"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.683670 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.714272 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.830312 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:56:01 crc kubenswrapper[4869]: W0202 14:56:01.833008 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc96f1eaa_fe0c_4111_9ee0_21d067b0d1aa.slice/crio-a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26 WatchSource:0}: Error finding container a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26: Status 404 returned error can't find the container with id a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.931508 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"]
Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.933318 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.936204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.936410 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.972207 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.050640 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.050709 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.050758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.051397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.155737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.157943 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.158076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.158198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.164568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.164610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.164962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.181953 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.293780 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.683163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.683670 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"0796932bd84ec076e7335a7406319502760ed8351d5e889f11c65dc928821a28"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.688443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerStarted","Data":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.688517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerStarted","Data":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.688535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerStarted","Data":"a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.726126 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7260985399999997 podStartE2EDuration="2.72609854s" podCreationTimestamp="2026-02-02 14:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:02.713590653 +0000 UTC m=+1364.358227433" watchObservedRunningTime="2026-02-02 14:56:02.72609854 +0000 UTC m=+1364.370735310" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.879146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 14:56:02 crc kubenswrapper[4869]: W0202 14:56:02.895612 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e3908c6_0f4b_4b27_8f07_9851e54d845b.slice/crio-def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571 WatchSource:0}: Error finding container def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571: Status 404 returned error can't find the container with id def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571 Feb 02 14:56:03 crc kubenswrapper[4869]: I0202 14:56:03.722206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"} Feb 02 14:56:03 crc kubenswrapper[4869]: I0202 14:56:03.729083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerStarted","Data":"b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f"} Feb 02 14:56:03 crc kubenswrapper[4869]: I0202 14:56:03.729154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerStarted","Data":"def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.062070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.096365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4296x" podStartSLOduration=3.096332299 podStartE2EDuration="3.096332299s" podCreationTimestamp="2026-02-02 14:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:03.752995213 +0000 UTC m=+1365.397631993" watchObservedRunningTime="2026-02-02 14:56:04.096332299 +0000 UTC m=+1365.740969079" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.162750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.163193 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" containerID="cri-o://3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" gracePeriod=10 Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.748307 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749624 4869 generic.go:334] "Generic (PLEG): container finished" podID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" exitCode=0 Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerDied","Data":"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerDied","Data":"9e1c8170bbe27458021229751e306804c8d9eb43efb07049fd479764776f395c"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749787 4869 scope.go:117] "RemoveContainer" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.766059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.779629 4869 scope.go:117] "RemoveContainer" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.834242 4869 scope.go:117] "RemoveContainer" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" Feb 02 14:56:04 crc kubenswrapper[4869]: E0202 14:56:04.834721 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf\": container with ID starting with 3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf not found: ID does not exist" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.834788 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf"} err="failed to get container status \"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf\": rpc error: code = NotFound desc = could not find container \"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf\": container with ID starting with 3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf not found: ID does not exist" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.834852 4869 scope.go:117] "RemoveContainer" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" Feb 02 14:56:04 crc kubenswrapper[4869]: E0202 14:56:04.835290 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c\": container with ID starting with 49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c not found: ID does not exist" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.835330 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c"} err="failed to get container status \"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c\": rpc error: code = NotFound desc = could not find container \"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c\": container with ID starting with 49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c not found: ID does not exist" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.872669 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.872820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.872900 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.873088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc 
kubenswrapper[4869]: I0202 14:56:04.873155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.884766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4" (OuterVolumeSpecName: "kube-api-access-wkrl4") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "kube-api-access-wkrl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.929793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.938667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config" (OuterVolumeSpecName: "config") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.938723 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.954007 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.975943 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.975990 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.976004 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.976019 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.976032 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:05 crc kubenswrapper[4869]: I0202 14:56:05.776598 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:56:05 crc kubenswrapper[4869]: I0202 14:56:05.815681 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:56:05 crc kubenswrapper[4869]: I0202 14:56:05.826210 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.476357 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" path="/var/lib/kubelet/pods/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7/volumes" Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.803804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"} Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.805225 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.838744 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.927509112 podStartE2EDuration="7.838710471s" podCreationTimestamp="2026-02-02 14:56:00 +0000 UTC" firstStartedPulling="2026-02-02 14:56:01.713699761 +0000 UTC m=+1363.358336531" lastFinishedPulling="2026-02-02 14:56:06.62490112 +0000 UTC m=+1368.269537890" observedRunningTime="2026-02-02 14:56:07.835129382 +0000 UTC m=+1369.479766172" watchObservedRunningTime="2026-02-02 14:56:07.838710471 +0000 UTC m=+1369.483347251" Feb 02 14:56:08 crc kubenswrapper[4869]: I0202 14:56:08.816130 4869 generic.go:334] "Generic (PLEG): container finished" podID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerID="b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f" exitCode=0 Feb 02 14:56:08 crc kubenswrapper[4869]: I0202 14:56:08.816239 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerDied","Data":"b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f"} Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.203252 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.307902 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.308294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.308502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.308560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.327838 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg" (OuterVolumeSpecName: "kube-api-access-lx9cg") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "kube-api-access-lx9cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.329027 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts" (OuterVolumeSpecName: "scripts") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.342658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.343371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data" (OuterVolumeSpecName: "config-data") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413864 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413922 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413936 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413946 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.839212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerDied","Data":"def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571"} Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.839636 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.839776 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.039723 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.040068 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" containerID="cri-o://c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.040293 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" containerID="cri-o://bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.055972 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.056293 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" containerID="cri-o://38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.117448 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.118209 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" 
containerName="nova-metadata-log" containerID="cri-o://00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.118451 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" containerID="cri-o://060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.732177 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767469 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.768026 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs" (OuterVolumeSpecName: "logs") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.777426 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq" (OuterVolumeSpecName: "kube-api-access-qnprq") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "kube-api-access-qnprq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.803381 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.811858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data" (OuterVolumeSpecName: "config-data") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864106 4869 generic.go:334] "Generic (PLEG): container finished" podID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" exitCode=0 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864138 4869 generic.go:334] "Generic (PLEG): container finished" podID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" exitCode=143 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerDied","Data":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerDied","Data":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerDied","Data":"a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864265 4869 scope.go:117] "RemoveContainer" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864431 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869211 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869648 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869752 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869839 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.870446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.871293 4869 generic.go:334] "Generic (PLEG): container finished" podID="19de8d9b-333e-4132-9b20-35258b84e935" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" exitCode=143 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.871371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerDied","Data":"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.885025 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.890046 4869 scope.go:117] "RemoveContainer" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.915300 4869 scope.go:117] "RemoveContainer" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: E0202 14:56:11.917163 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": container with ID starting with bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49 not found: ID does not exist" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917206 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} err="failed to get container status \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": rpc error: code = NotFound desc = could not find container \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": container with ID starting with bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917239 4869 scope.go:117] "RemoveContainer" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: E0202 14:56:11.917501 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": container with ID starting with c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2 not found: ID does not exist" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917525 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} err="failed to get container status \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": rpc error: code = NotFound desc = could not find container \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": container with ID starting with c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917543 4869 scope.go:117] "RemoveContainer" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917785 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} err="failed to get container status \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": rpc error: code = NotFound desc = could not find container \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": container with ID starting with bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917806 4869 
scope.go:117] "RemoveContainer" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.918221 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} err="failed to get container status \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": rpc error: code = NotFound desc = could not find container \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": container with ID starting with c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.970641 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.970677 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.240611 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.251633 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278022 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278571 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278596 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278615 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278624 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278636 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278650 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278671 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerName="nova-manage" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278678 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerName="nova-manage" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278775 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="init" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278786 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="init" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279018 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerName="nova-manage" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279042 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279057 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279081 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.296169 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.298964 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.299441 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.305783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.309655 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378192 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-config-data\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-logs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378257 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvnw\" (UniqueName: \"kubernetes.io/projected/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-kube-api-access-mgvnw\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc 
kubenswrapper[4869]: I0202 14:56:12.378555 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-config-data\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-logs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvnw\" (UniqueName: \"kubernetes.io/projected/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-kube-api-access-mgvnw\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480701 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.481317 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-logs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.484965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.485077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.487561 
4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.488138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-config-data\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.499482 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvnw\" (UniqueName: \"kubernetes.io/projected/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-kube-api-access-mgvnw\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.631538 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.898849 4869 generic.go:334] "Generic (PLEG): container finished" podID="719e20f4-473b-4859-8730-d15fe8c662aa" containerID="38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88" exitCode=0 Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.899135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerDied","Data":"38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.131816 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:13 crc kubenswrapper[4869]: W0202 14:56:13.136662 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f2e77f7_6ccb_4992_8292_e69f277dc8f2.slice/crio-f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878 WatchSource:0}: Error finding container f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878: Status 404 returned error can't find the container with id f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878 Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.280627 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.404200 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"719e20f4-473b-4859-8730-d15fe8c662aa\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.404270 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"719e20f4-473b-4859-8730-d15fe8c662aa\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.404358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod \"719e20f4-473b-4859-8730-d15fe8c662aa\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.408996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p" (OuterVolumeSpecName: "kube-api-access-d7t4p") pod "719e20f4-473b-4859-8730-d15fe8c662aa" (UID: "719e20f4-473b-4859-8730-d15fe8c662aa"). InnerVolumeSpecName "kube-api-access-d7t4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.438986 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "719e20f4-473b-4859-8730-d15fe8c662aa" (UID: "719e20f4-473b-4859-8730-d15fe8c662aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.442384 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data" (OuterVolumeSpecName: "config-data") pod "719e20f4-473b-4859-8730-d15fe8c662aa" (UID: "719e20f4-473b-4859-8730-d15fe8c662aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.506601 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.506638 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.506653 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.511424 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" path="/var/lib/kubelet/pods/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa/volumes" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.920548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f2e77f7-6ccb-4992-8292-e69f277dc8f2","Type":"ContainerStarted","Data":"3e7dd1a52bd7442cf06499e0562d1c21586e6fd515cec10ecef1c409c3e41eeb"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.920630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f2e77f7-6ccb-4992-8292-e69f277dc8f2","Type":"ContainerStarted","Data":"00d4cc7404af22df7fd841747b98d88cef413f17a55995e3c395a6791d71c4d5"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.920642 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f2e77f7-6ccb-4992-8292-e69f277dc8f2","Type":"ContainerStarted","Data":"f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.925288 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerDied","Data":"ad2b09060cc90b2b66052da409b095c5c7bf4ff33b856487d4aab5822df918b3"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.925357 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.925403 4869 scope.go:117] "RemoveContainer" containerID="38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.950440 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.950417858 podStartE2EDuration="1.950417858s" podCreationTimestamp="2026-02-02 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:13.941832976 +0000 UTC m=+1375.586469766" watchObservedRunningTime="2026-02-02 14:56:13.950417858 +0000 UTC m=+1375.595054628" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.973901 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.996699 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.006357 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: E0202 14:56:14.006978 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.006999 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.007194 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.008102 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.013471 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.022090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-config-data\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.022147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhv4\" (UniqueName: \"kubernetes.io/projected/46796adc-7f57-405f-bb4c-a2ccb79153f2-kube-api-access-grhv4\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.022226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.026950 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.124599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.125101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-config-data\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.125228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhv4\" (UniqueName: \"kubernetes.io/projected/46796adc-7f57-405f-bb4c-a2ccb79153f2-kube-api-access-grhv4\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.132432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.132847 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-config-data\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.145747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhv4\" (UniqueName: 
\"kubernetes.io/projected/46796adc-7f57-405f-bb4c-a2ccb79153f2-kube-api-access-grhv4\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.335447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.780449 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.844837 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.844949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.845049 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.845076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.845235 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.846364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs" (OuterVolumeSpecName: "logs") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.863578 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7" (OuterVolumeSpecName: "kube-api-access-lvfz7") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "kube-api-access-lvfz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.888333 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.894161 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data" (OuterVolumeSpecName: "config-data") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.906973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947795 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947844 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947860 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947874 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949009 4869 generic.go:334] "Generic (PLEG): container finished" podID="19de8d9b-333e-4132-9b20-35258b84e935" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" exitCode=0 Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerDied","Data":"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f"} Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949165 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerDied","Data":"f2995f40ac54472f74017bd157579158e7b1849e936f0eca8f4970077675a29d"} Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949187 4869 scope.go:117] "RemoveContainer" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949222 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.956303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46796adc-7f57-405f-bb4c-a2ccb79153f2","Type":"ContainerStarted","Data":"b061032e19eaddc126231c75da55fdb1cc47af650877d0736bd1df81a7b8991e"} Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.971001 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.985288 4869 scope.go:117] "RemoveContainer" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.021649 4869 scope.go:117] "RemoveContainer" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.022287 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f\": container with ID starting with 060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f not found: ID does not exist" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.022325 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f"} err="failed to get container status \"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f\": rpc error: code = NotFound desc = could not find container \"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f\": container with ID starting with 060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f not found: ID does not exist" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.022352 4869 scope.go:117] "RemoveContainer" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.025358 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd\": container with ID starting with 00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd not found: ID does not exist" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.025393 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd"} err="failed to get container status \"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd\": rpc error: code = NotFound desc = could not find container \"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd\": container with ID starting with 00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd not found: ID does not exist" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.048787 4869 
reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.286675 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.297100 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.304992 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.305308 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.305382 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.306615 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.306693 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666" gracePeriod=600 Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314014 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.314566 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314597 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.314654 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314664 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314887 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314947 4869 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.316011 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.324541 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.325087 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.327234 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355412 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c133ea7-0c2e-4338-a24b-319409d4e41a-logs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwf89\" (UniqueName: \"kubernetes.io/projected/0c133ea7-0c2e-4338-a24b-319409d4e41a-kube-api-access-xwf89\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355588 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-config-data\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-config-data\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457890 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c133ea7-0c2e-4338-a24b-319409d4e41a-logs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457929 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457964 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwf89\" (UniqueName: \"kubernetes.io/projected/0c133ea7-0c2e-4338-a24b-319409d4e41a-kube-api-access-xwf89\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.458451 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c133ea7-0c2e-4338-a24b-319409d4e41a-logs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.465269 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.467240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.476577 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-config-data\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.478522 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19de8d9b-333e-4132-9b20-35258b84e935" path="/var/lib/kubelet/pods/19de8d9b-333e-4132-9b20-35258b84e935/volumes" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.479345 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" path="/var/lib/kubelet/pods/719e20f4-473b-4859-8730-d15fe8c662aa/volumes" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.481434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwf89\" (UniqueName: \"kubernetes.io/projected/0c133ea7-0c2e-4338-a24b-319409d4e41a-kube-api-access-xwf89\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.664673 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.993821 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666" exitCode=0 Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.994380 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666"} Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.994419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"} Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.994444 4869 scope.go:117] "RemoveContainer" containerID="1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.997330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46796adc-7f57-405f-bb4c-a2ccb79153f2","Type":"ContainerStarted","Data":"74cce2da88f222488003067f7b34f7c51117b43c17f51b4d3fe102d888d2fa77"} Feb 02 14:56:16 crc kubenswrapper[4869]: I0202 14:56:16.045587 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.045564211 podStartE2EDuration="3.045564211s" podCreationTimestamp="2026-02-02 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:16.044455694 +0000 UTC m=+1377.689092484" watchObservedRunningTime="2026-02-02 14:56:16.045564211 +0000 UTC m=+1377.690200981" Feb 02 14:56:16 crc kubenswrapper[4869]: I0202 14:56:16.158555 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.016173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c133ea7-0c2e-4338-a24b-319409d4e41a","Type":"ContainerStarted","Data":"b0cb1b2d299f5b885b8ebda4139c41e9e524d39f49517385d35d41463db733a7"} Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.016898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c133ea7-0c2e-4338-a24b-319409d4e41a","Type":"ContainerStarted","Data":"299487cfb600a0ff9459e9a0b6428d7aa8dc8703ed64dc09b0c82b39fdafed20"} Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.016939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c133ea7-0c2e-4338-a24b-319409d4e41a","Type":"ContainerStarted","Data":"a0fe56cdaddddff2a1fd1474f11a5990f8338dc794e8b6342b28cfaa1f1b8386"} Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.049560 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.049530457 podStartE2EDuration="2.049530457s" podCreationTimestamp="2026-02-02 14:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:17.045373135 +0000 UTC m=+1378.690009915" 
watchObservedRunningTime="2026-02-02 14:56:17.049530457 +0000 UTC m=+1378.694167227" Feb 02 14:56:19 crc kubenswrapper[4869]: I0202 14:56:19.336511 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 14:56:20 crc kubenswrapper[4869]: I0202 14:56:20.665840 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:56:20 crc kubenswrapper[4869]: I0202 14:56:20.666569 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:56:22 crc kubenswrapper[4869]: I0202 14:56:22.632002 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:56:22 crc kubenswrapper[4869]: I0202 14:56:22.632076 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:56:23 crc kubenswrapper[4869]: I0202 14:56:23.648151 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6f2e77f7-6ccb-4992-8292-e69f277dc8f2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:56:23 crc kubenswrapper[4869]: I0202 14:56:23.648144 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6f2e77f7-6ccb-4992-8292-e69f277dc8f2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:56:24 crc kubenswrapper[4869]: I0202 14:56:24.336511 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 14:56:24 crc kubenswrapper[4869]: I0202 14:56:24.366258 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 14:56:25 crc kubenswrapper[4869]: I0202 14:56:25.124559 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 14:56:25 crc kubenswrapper[4869]: I0202 14:56:25.665800 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 14:56:25 crc kubenswrapper[4869]: I0202 14:56:25.665886 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 14:56:26 crc kubenswrapper[4869]: I0202 14:56:26.680374 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c133ea7-0c2e-4338-a24b-319409d4e41a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:56:26 crc kubenswrapper[4869]: I0202 14:56:26.680382 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c133ea7-0c2e-4338-a24b-319409d4e41a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:56:31 crc kubenswrapper[4869]: I0202 14:56:31.181747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.639495 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.641029 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.641508 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.653421 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 14:56:33 crc kubenswrapper[4869]: I0202 14:56:33.193643 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 14:56:33 crc kubenswrapper[4869]: I0202 14:56:33.204617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.672054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.672149 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.677990 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.680802 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 14:56:44 crc kubenswrapper[4869]: I0202 14:56:44.386976 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:46 crc kubenswrapper[4869]: I0202 14:56:46.950214 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:56:48 crc kubenswrapper[4869]: I0202 14:56:48.905317 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:56:48 crc kubenswrapper[4869]: I0202 14:56:48.908166 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:48 crc kubenswrapper[4869]: I0202 14:56:48.942379 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.054783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.055280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.055489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.158058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.158621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.158753 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.159117 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.159480 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.197262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.235470 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.665617 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" containerID="cri-o://0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" gracePeriod=604795 Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.783239 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:56:50 crc kubenswrapper[4869]: I0202 14:56:50.411054 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5afe82-077a-4545-84a3-54f108a39d37" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" exitCode=0 Feb 02 14:56:50 crc kubenswrapper[4869]: I0202 14:56:50.411515 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58"} Feb 02 14:56:50 crc kubenswrapper[4869]: I0202 14:56:50.411567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerStarted","Data":"d15cca6f8345e4d73be82151bb0e28ba11b1504dccb9fda5d84b628c49012abf"} Feb 02 14:56:52 crc kubenswrapper[4869]: I0202 14:56:52.418069 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" containerID="cri-o://7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" gracePeriod=604795 Feb 02 14:56:52 crc kubenswrapper[4869]: I0202 14:56:52.433583 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5afe82-077a-4545-84a3-54f108a39d37" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" exitCode=0 Feb 02 14:56:52 crc kubenswrapper[4869]: I0202 14:56:52.433650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73"} Feb 02 14:56:54 crc kubenswrapper[4869]: I0202 14:56:54.858460 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 02 14:56:55 crc kubenswrapper[4869]: I0202 14:56:55.213650 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Feb 02 14:56:55 crc kubenswrapper[4869]: I0202 14:56:55.499286 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerStarted","Data":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"} Feb 02 14:56:55 crc kubenswrapper[4869]: I0202 14:56:55.532428 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6x247" podStartSLOduration=3.116609496 podStartE2EDuration="7.532406762s" podCreationTimestamp="2026-02-02 14:56:48 +0000 UTC" firstStartedPulling="2026-02-02 14:56:50.415150169 +0000 UTC m=+1412.059786939" lastFinishedPulling="2026-02-02 14:56:54.830947435 +0000 UTC m=+1416.475584205" observedRunningTime="2026-02-02 14:56:55.52094041 +0000 UTC m=+1417.165577190" watchObservedRunningTime="2026-02-02 14:56:55.532406762 +0000 UTC m=+1417.177043552" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.286280 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.428870 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.428989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429124 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429271 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429396 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429450 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.431073 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.431274 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.431290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.445726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.449358 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.450147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.457155 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr" (OuterVolumeSpecName: "kube-api-access-jfjdr") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "kube-api-access-jfjdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.458123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info" (OuterVolumeSpecName: "pod-info") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.527685 4869 generic.go:334] "Generic (PLEG): container finished" podID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" exitCode=0 Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529139 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerDied","Data":"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1"} Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerDied","Data":"71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b"} Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529332 4869 scope.go:117] "RemoveContainer" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533794 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533824 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533855 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533889 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533899 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533930 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533939 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533949 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.550252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data" (OuterVolumeSpecName: "config-data") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.557149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf" (OuterVolumeSpecName: "server-conf") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.577949 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.602661 4869 scope.go:117] "RemoveContainer" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.606046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.628057 4869 scope.go:117] "RemoveContainer" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.628781 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1\": container with ID starting with 0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1 not found: ID does not exist" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.628886 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1"} err="failed to get container status \"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1\": rpc error: code = NotFound desc = could not find container \"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1\": container with ID starting with 0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1 not found: ID does not exist" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.628965 4869 scope.go:117] "RemoveContainer" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.629387 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7\": container with ID starting with 9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7 not found: ID does not exist" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.629489 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7"} err="failed to get container status 
\"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7\": rpc error: code = NotFound desc = could not find container \"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7\": container with ID starting with 9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7 not found: ID does not exist" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640301 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640747 4869 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640866 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640963 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.879206 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.897803 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.912147 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.913033 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="setup-container" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.913151 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="setup-container" Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.913264 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.913340 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.913650 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.915421 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918101 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918400 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.922639 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.922849 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gjvp4" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.924814 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.936123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049569 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d228ac68-eb5f-494a-bf43-6cbca346ae24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-config-data\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d228ac68-eb5f-494a-bf43-6cbca346ae24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049777 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fnq\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-kube-api-access-76fnq\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d228ac68-eb5f-494a-bf43-6cbca346ae24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d228ac68-eb5f-494a-bf43-6cbca346ae24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-config-data\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " 
pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152491 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152510 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76fnq\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-kube-api-access-76fnq\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152604 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.153223 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154509 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-config-data\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.155053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.157877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d228ac68-eb5f-494a-bf43-6cbca346ae24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.158843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.159769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.160118 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d228ac68-eb5f-494a-bf43-6cbca346ae24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.172958 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76fnq\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-kube-api-access-76fnq\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.193417 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.238854 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.477434 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" path="/var/lib/kubelet/pods/b339c96d-7eb1-4359-bcc3-6853622d5aa6/volumes" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.970978 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:57 crc kubenswrapper[4869]: W0202 14:56:57.980757 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd228ac68_eb5f_494a_bf43_6cbca346ae24.slice/crio-1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df WatchSource:0}: Error finding container 1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df: Status 404 returned error can't find the container with id 1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df Feb 02 14:56:58 crc kubenswrapper[4869]: I0202 14:56:58.640000 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerStarted","Data":"1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df"} Feb 02 14:56:59 crc kubenswrapper[4869]: I0202 14:56:59.236632 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:59 crc kubenswrapper[4869]: I0202 14:56:59.237114 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.280536 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6x247" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" probeResult="failure" output=< Feb 02 14:57:00 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 14:57:00 crc kubenswrapper[4869]: > Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.661423 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerStarted","Data":"b9c5ab38ce0f1b23eedeb1840f6aa6cf45b7beba13d99fdded4d92eee9ace4f8"} Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.750323 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.754443 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.757909 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.782713 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946631 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946865 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: 
\"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050637 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 
14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.079744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.380182 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.536961 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678649 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678718 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678735 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678764 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678780 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: 
\"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678800 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678914 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.698344 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.701813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.704080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.709789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info" (OuterVolumeSpecName: "pod-info") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.719586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.722961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.724206 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5" (OuterVolumeSpecName: "kube-api-access-zkxg5") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "kube-api-access-zkxg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.732275 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.732929 4869 generic.go:334] "Generic (PLEG): container finished" podID="95035071-a194-40ba-9b64-700ae3121dc4" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" exitCode=0 Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.733460 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.734127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerDied","Data":"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"} Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.734181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerDied","Data":"4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965"} Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.734205 4869 scope.go:117] "RemoveContainer" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.757786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data" (OuterVolumeSpecName: "config-data") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.772696 4869 scope.go:117] "RemoveContainer" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798650 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798699 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798715 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798728 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798776 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798790 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798803 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798818 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798830 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.823948 4869 scope.go:117] "RemoveContainer" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" Feb 02 14:57:01 crc kubenswrapper[4869]: E0202 14:57:01.824946 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa\": container with ID starting with 7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa not found: ID does not exist" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.824983 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"} err="failed to get 
container status \"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa\": rpc error: code = NotFound desc = could not find container \"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa\": container with ID starting with 7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa not found: ID does not exist" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.825011 4869 scope.go:117] "RemoveContainer" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93" Feb 02 14:57:01 crc kubenswrapper[4869]: E0202 14:57:01.826768 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93\": container with ID starting with 5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93 not found: ID does not exist" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.826798 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"} err="failed to get container status \"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93\": rpc error: code = NotFound desc = could not find container \"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93\": container with ID starting with 5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93 not found: ID does not exist" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.836039 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.843449 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf" (OuterVolumeSpecName: "server-conf") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.880510 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.902390 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.902436 4869 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.902449 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.088439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:02 crc kubenswrapper[4869]: W0202 14:57:02.091367 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6110b1ea_6ea9_454e_b77b_7c9d1373e376.slice/crio-6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b WatchSource:0}: Error finding container 6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b: Status 404 returned error can't find the container with id 6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.098256 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.106522 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.156130 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: E0202 14:57:02.157139 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.157161 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" Feb 02 14:57:02 crc kubenswrapper[4869]: E0202 14:57:02.157176 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="setup-container" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.157183 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="setup-container" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.157388 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.158606 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.162756 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gtj7h" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163229 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163441 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163690 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163837 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.164071 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.172697 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322440 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebc9110-3186-4c3f-877b-44061d345584-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qbk\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-kube-api-access-r5qbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322745 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebc9110-3186-4c3f-877b-44061d345584-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322865 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.424860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5qbk\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-kube-api-access-r5qbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.424910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.426910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebc9110-3186-4c3f-877b-44061d345584-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427294 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebc9110-3186-4c3f-877b-44061d345584-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427740 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.428256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.428608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.429255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.429909 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.431539 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.433173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.433818 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.434355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebc9110-3186-4c3f-877b-44061d345584-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.434516 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebc9110-3186-4c3f-877b-44061d345584-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.447883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5qbk\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-kube-api-access-r5qbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 
14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.458345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.596279 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.758915 4869 generic.go:334] "Generic (PLEG): container finished" podID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" exitCode=0 Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.760162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerDied","Data":"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2"} Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.760271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerStarted","Data":"6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b"} Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.075098 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.476483 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95035071-a194-40ba-9b64-700ae3121dc4" path="/var/lib/kubelet/pods/95035071-a194-40ba-9b64-700ae3121dc4/volumes" Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.773727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"c7359c171b09799208d5ca9c708ada6778b2861dc2f3c28fb5456f4c1ab1b124"} Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.776483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerStarted","Data":"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e"} Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.776721 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.806581 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-svw28" podStartSLOduration=3.806557462 podStartE2EDuration="3.806557462s" podCreationTimestamp="2026-02-02 14:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:03.79631893 +0000 UTC m=+1425.440955710" watchObservedRunningTime="2026-02-02 14:57:03.806557462 +0000 UTC m=+1425.451194242" Feb 02 14:57:04 crc kubenswrapper[4869]: I0202 14:57:04.791294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616"} Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.290818 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.349681 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.533060 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:57:10 crc kubenswrapper[4869]: I0202 14:57:10.850375 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6x247" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" containerID="cri-o://b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" gracePeriod=2 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.313529 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.383088 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.440468 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"4e5afe82-077a-4545-84a3-54f108a39d37\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.440729 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"4e5afe82-077a-4545-84a3-54f108a39d37\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.441055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"4e5afe82-077a-4545-84a3-54f108a39d37\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.442079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities" (OuterVolumeSpecName: "utilities") pod "4e5afe82-077a-4545-84a3-54f108a39d37" (UID: "4e5afe82-077a-4545-84a3-54f108a39d37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.450588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc" (OuterVolumeSpecName: "kube-api-access-vlngc") pod "4e5afe82-077a-4545-84a3-54f108a39d37" (UID: "4e5afe82-077a-4545-84a3-54f108a39d37"). InnerVolumeSpecName "kube-api-access-vlngc". 
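The records above trace both directions of the kubelet volume reconciler: for rabbitmq-cell1-server-0, operationExecutor.VerifyControllerAttachedVolume per volume, then MountVolume.MountDevice for the local PV, then MountVolume.SetUp per volume; for the deleted redhat-operators-6x247 pod, the reverse UnmountVolume.TearDown path. A minimal Go sketch that replays a journal like this one from stdin and pairs each mount-side "started" record with its "succeeded" record by volume UniqueName; it assumes only the quoted klog message format visible above and is not kubelet code:

    // pairvolumes.go — pair mount-side "started"/"succeeded" records
    // per volume UniqueName while replaying a kubelet journal on stdin.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var (
    	// e.g. operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/...\")
    	started = regexp.MustCompile(`operationExecutor\.(VerifyControllerAttachedVolume|MountVolume) started for volume \\"[^"]+\\" \(UniqueName: \\"([^"\\]+)\\"`)
    	// e.g. MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/...\")
    	succeeded = regexp.MustCompile(`(\w+\.\w+) succeeded for volume \\"[^"]+\\" \(UniqueName: \\"([^"\\]+)\\"`)
    )

    func main() {
    	pending := map[string]string{} // UniqueName -> last operation started
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be long
    	for sc.Scan() {
    		if m := started.FindStringSubmatch(sc.Text()); m != nil {
    			pending[m[2]] = m[1]
    		}
    		if m := succeeded.FindStringSubmatch(sc.Text()); m != nil {
    			fmt.Printf("%s done: %s\n", m[1], m[2])
    			delete(pending, m[2])
    		}
    	}
    	for name, op := range pending {
    		fmt.Printf("never completed: %s (%s)\n", op, name)
    	}
    }

Anything still in pending when the stream ends is a mount that was started but never confirmed, which is usually the first thing to look for when a pod sticks in ContainerCreating.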
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.456311 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.456730 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" containerID="cri-o://498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7" gracePeriod=10 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.543504 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.543532 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.612715 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e5afe82-077a-4545-84a3-54f108a39d37" (UID: "4e5afe82-077a-4545-84a3-54f108a39d37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.645410 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.695732 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 14:57:11 crc kubenswrapper[4869]: E0202 14:57:11.696353 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-content" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.696372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-content" Feb 02 14:57:11 crc kubenswrapper[4869]: E0202 14:57:11.696397 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.696406 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" Feb 02 14:57:11 crc kubenswrapper[4869]: E0202 14:57:11.696426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-utilities" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.696435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-utilities" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.713563 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.718956 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.778300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.857791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.858291 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.858500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.859602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.860064 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.861753 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.876982 4869 generic.go:334] "Generic (PLEG): container finished" podID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerID="498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7" exitCode=0 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.877309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerDied","Data":"498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7"} Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889111 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5afe82-077a-4545-84a3-54f108a39d37" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" exitCode=0 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889344 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"} Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"d15cca6f8345e4d73be82151bb0e28ba11b1504dccb9fda5d84b628c49012abf"} Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889682 4869 scope.go:117] "RemoveContainer" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889479 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.915167 4869 scope.go:117] "RemoveContainer" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.961467 4869 scope.go:117] "RemoveContainer" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.965991 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: 
\"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.967744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.973220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.974023 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.974607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.975362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.992274 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.004249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.013254 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.075325 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.217952 4869 scope.go:117] "RemoveContainer" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" Feb 02 14:57:12 crc kubenswrapper[4869]: E0202 14:57:12.222539 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56\": container with ID starting with b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56 not found: ID does not exist" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.222620 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"} err="failed to get container status \"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56\": rpc error: code = NotFound desc = could not find container \"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56\": container with ID starting with b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56 not found: ID does not exist" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.222665 4869 scope.go:117] "RemoveContainer" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" Feb 02 14:57:12 crc kubenswrapper[4869]: E0202 14:57:12.223368 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73\": container with ID starting with 7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73 not found: ID does not exist" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.223422 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73"} err="failed to get container status \"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73\": rpc error: code = NotFound desc = could not find container \"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73\": container with ID starting with 7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73 not found: ID does not exist" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.223462 4869 scope.go:117] "RemoveContainer" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" Feb 02 14:57:12 crc kubenswrapper[4869]: E0202 14:57:12.223856 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58\": container with ID starting with a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58 not found: ID does not exist" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.223942 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58"} err="failed to get container status \"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58\": rpc error: code = NotFound 
desc = could not find container \"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58\": container with ID starting with a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58 not found: ID does not exist" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.251536 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.377959 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378238 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378394 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378465 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.384766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp" (OuterVolumeSpecName: "kube-api-access-gf2cp") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "kube-api-access-gf2cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.429881 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.435707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "ovsdbserver-nb". 
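The E-level "ContainerStatus from runtime service failed ... NotFound" records above are benign: the kubelet re-issues RemoveContainer for IDs that CRI-O has already deleted, and NotFound simply confirms the desired end state, so the DeleteContainer error is logged and dropped. A generic sketch of that idempotent-delete pattern over gRPC status codes; removeFromRuntime is a hypothetical stand-in, not the real CRI client:

    // idempotent_remove.go — treat NotFound on delete as success.
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeFromRuntime stands in for a CRI RemoveContainer RPC and
    // fails the way the records above do.
    func removeFromRuntime(id string) error {
    	return status.Error(codes.NotFound, "could not find container "+id)
    }

    // remove is idempotent: a container that is already gone leaves
    // the system in exactly the state the caller wanted.
    func remove(id string) error {
    	err := removeFromRuntime(id)
    	if status.Code(err) == codes.NotFound {
    		fmt.Printf("container %s already gone, nothing to do\n", id)
    		return nil
    	}
    	return err
    }

    func main() {
    	if err := remove("b70a39da8a6f"); err != nil {
    		fmt.Println("remove failed:", err)
    	}
    }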
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.440680 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config" (OuterVolumeSpecName: "config") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.441349 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495546 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495589 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495601 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495614 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495623 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.572739 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 14:57:12 crc kubenswrapper[4869]: W0202 14:57:12.575803 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod886da892_6808_4ff8_8fa4_48ad9cd65843.slice/crio-f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12 WatchSource:0}: Error finding container f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12: Status 404 returned error can't find the container with id f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12 Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.904321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerDied","Data":"b0a192cf90b2c34b440565bf71d8167abd947c406c2ba5f06b41ea7ba562f653"} Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.904367 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.905018 4869 scope.go:117] "RemoveContainer" containerID="498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.911288 4869 generic.go:334] "Generic (PLEG): container finished" podID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerID="267d2b5ca4d238e5b769ca48e7a762954290c341c2ea35ac8b67c09d6240f345" exitCode=0 Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.911421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerDied","Data":"267d2b5ca4d238e5b769ca48e7a762954290c341c2ea35ac8b67c09d6240f345"} Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.911521 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerStarted","Data":"f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12"} Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.963370 4869 scope.go:117] "RemoveContainer" containerID="8cf856a4df374f3980cbc2ddc8eb1618f3c5e7b2fc6a969f06245cd19d267eb6" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.990420 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.000171 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.474958 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" path="/var/lib/kubelet/pods/02258ec9-a572-417b-bb4c-35d0e5595e60/volumes" Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.475627 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" path="/var/lib/kubelet/pods/4e5afe82-077a-4545-84a3-54f108a39d37/volumes" Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.925373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerStarted","Data":"f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8"} Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.959738 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" podStartSLOduration=2.959708404 podStartE2EDuration="2.959708404s" podCreationTimestamp="2026-02-02 14:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:13.950009566 +0000 UTC m=+1435.594646356" watchObservedRunningTime="2026-02-02 14:57:13.959708404 +0000 UTC m=+1435.604345174" Feb 02 14:57:14 crc kubenswrapper[4869]: I0202 14:57:14.937946 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.078007 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.156605 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 
14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.157095 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-svw28" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" containerID="cri-o://7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" gracePeriod=10 Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.695473 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.853737 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854102 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.862224 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk" (OuterVolumeSpecName: "kube-api-access-lrdpk") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "kube-api-access-lrdpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.917262 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "ovsdbserver-sb". 
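The "Killing container with a grace period" records above (gracePeriod=2 for the marketplace registry server, gracePeriod=10 for dnsmasq-dns) follow the usual TERM-then-KILL contract: deliver SIGTERM, wait out the grace period, and only then force-kill. A process-level sketch of that policy, assuming a plain Unix child process rather than a CRI-O container:

    // graceful_kill.go — SIGTERM now, SIGKILL after the deadline.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	_ = cmd.Process.Signal(syscall.SIGTERM) // polite: ask it to exit
    	select {
    	case <-done:
    		fmt.Println("exited within grace period")
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // deadline passed: force it
    		<-done
    		fmt.Println("killed after grace period expired")
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60") // stand-in for a container process
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	killWithGrace(cmd, 2*time.Second)
    }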
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.922619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.931524 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config" (OuterVolumeSpecName: "config") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.932624 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.943214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957707 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957768 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957783 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957799 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957849 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957863 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037033 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" exitCode=0 Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerDied","Data":"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e"} Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerDied","Data":"6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b"} Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037137 4869 scope.go:117] "RemoveContainer" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037320 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.081873 4869 scope.go:117] "RemoveContainer" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.088068 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.098559 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.115617 4869 scope.go:117] "RemoveContainer" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" Feb 02 14:57:23 crc kubenswrapper[4869]: E0202 14:57:23.116550 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e\": container with ID starting with 7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e not found: ID does not exist" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.116628 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e"} err="failed to get container status \"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e\": rpc error: code = NotFound desc = could not find container \"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e\": container with ID starting with 7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e not found: ID does not exist" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.116667 4869 scope.go:117] "RemoveContainer" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" Feb 02 14:57:23 crc kubenswrapper[4869]: E0202 14:57:23.117273 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2\": container with ID starting with 790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2 not found: ID does not exist" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" Feb 02 14:57:23 crc 
kubenswrapper[4869]: I0202 14:57:23.117318 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2"} err="failed to get container status \"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2\": rpc error: code = NotFound desc = could not find container \"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2\": container with ID starting with 790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2 not found: ID does not exist" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.482074 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" path="/var/lib/kubelet/pods/6110b1ea-6ea9-454e-b77b-7c9d1373e376/volumes" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.841841 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.844486 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.844580 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.844653 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.844747 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.844835 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.844936 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.845049 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.845109 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.845388 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.845492 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.846425 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.851448 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.853400 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.853765 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.854061 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.862891 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.964771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.964848 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.964896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.965049 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrqh\" (UniqueName: 
\"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.075285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.075776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.076063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.086943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.187950 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.767814 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 14:57:29 crc kubenswrapper[4869]: I0202 14:57:29.103663 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerStarted","Data":"5b8d0ac79d9a381090de5328513e4ac984ba5c97f5a488afb997d250b9c4b276"} Feb 02 14:57:32 crc kubenswrapper[4869]: I0202 14:57:32.137619 4869 generic.go:334] "Generic (PLEG): container finished" podID="d228ac68-eb5f-494a-bf43-6cbca346ae24" containerID="b9c5ab38ce0f1b23eedeb1840f6aa6cf45b7beba13d99fdded4d92eee9ace4f8" exitCode=0 Feb 02 14:57:32 crc kubenswrapper[4869]: I0202 14:57:32.137719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerDied","Data":"b9c5ab38ce0f1b23eedeb1840f6aa6cf45b7beba13d99fdded4d92eee9ace4f8"} Feb 02 14:57:37 crc kubenswrapper[4869]: I0202 14:57:37.217694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerDied","Data":"8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616"} Feb 02 14:57:37 crc kubenswrapper[4869]: I0202 14:57:37.217710 4869 generic.go:334] "Generic (PLEG): container finished" podID="cebc9110-3186-4c3f-877b-44061d345584" containerID="8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616" exitCode=0 Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.246081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerStarted","Data":"490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a"} Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.249467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerStarted","Data":"5d09b3992a64c693b0a12274c0ee78e5a8fd50558706d5c9f19bfb09b5c8ce2c"} Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.249854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.253059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"99d39ff21110e6011c04638632d69e563c1d763e9e580c53e69c86e83fce8681"} Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.253626 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.271922 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" podStartSLOduration=2.7375587660000003 podStartE2EDuration="12.271875529s" podCreationTimestamp="2026-02-02 14:57:27 +0000 UTC" firstStartedPulling="2026-02-02 14:57:28.776400295 +0000 UTC m=+1450.421037075" lastFinishedPulling="2026-02-02 14:57:38.310717068 +0000 UTC m=+1459.955353838" 
observedRunningTime="2026-02-02 14:57:39.266929557 +0000 UTC m=+1460.911566347" watchObservedRunningTime="2026-02-02 14:57:39.271875529 +0000 UTC m=+1460.916512299" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.303137 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.303106819 podStartE2EDuration="37.303106819s" podCreationTimestamp="2026-02-02 14:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:39.292245871 +0000 UTC m=+1460.936882651" watchObservedRunningTime="2026-02-02 14:57:39.303106819 +0000 UTC m=+1460.947743589" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.323363 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.323337408 podStartE2EDuration="43.323337408s" podCreationTimestamp="2026-02-02 14:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:39.315522625 +0000 UTC m=+1460.960159395" watchObservedRunningTime="2026-02-02 14:57:39.323337408 +0000 UTC m=+1460.967974178" Feb 02 14:57:50 crc kubenswrapper[4869]: I0202 14:57:50.363640 4869 generic.go:334] "Generic (PLEG): container finished" podID="3767bf04-261f-4a7b-9639-ae8002718621" containerID="490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a" exitCode=0 Feb 02 14:57:50 crc kubenswrapper[4869]: I0202 14:57:50.363754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerDied","Data":"490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a"} Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.802999 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913645 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.921529 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.921514 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh" (OuterVolumeSpecName: "kube-api-access-vcrqh") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "kube-api-access-vcrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.946719 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.948361 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory" (OuterVolumeSpecName: "inventory") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.017884 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.028165 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.028207 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.028220 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.385102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerDied","Data":"5b8d0ac79d9a381090de5328513e4ac984ba5c97f5a488afb997d250b9c4b276"} Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.385171 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8d0ac79d9a381090de5328513e4ac984ba5c97f5a488afb997d250b9c4b276" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.385202 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.479572 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 14:57:52 crc kubenswrapper[4869]: E0202 14:57:52.480206 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3767bf04-261f-4a7b-9639-ae8002718621" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.480231 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3767bf04-261f-4a7b-9639-ae8002718621" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.480559 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3767bf04-261f-4a7b-9639-ae8002718621" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.481590 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485145 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485294 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485305 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.494142 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.601676 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644302 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644348 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.746766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.747622 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.748277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.748405 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.751898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.751930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.758905 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.770844 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.803704 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:53 crc kubenswrapper[4869]: I0202 14:57:53.477515 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 14:57:54 crc kubenswrapper[4869]: I0202 14:57:54.417374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerStarted","Data":"7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112"} Feb 02 14:57:54 crc kubenswrapper[4869]: I0202 14:57:54.417833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerStarted","Data":"45eb9092023474510986497b58938f8c056cf9410d12598b17849390008c5c0f"} Feb 02 14:57:54 crc kubenswrapper[4869]: I0202 14:57:54.447121 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" podStartSLOduration=1.978949684 podStartE2EDuration="2.447097959s" podCreationTimestamp="2026-02-02 14:57:52 +0000 UTC" firstStartedPulling="2026-02-02 14:57:53.488434189 +0000 UTC m=+1475.133070959" lastFinishedPulling="2026-02-02 14:57:53.956582464 +0000 UTC m=+1475.601219234" observedRunningTime="2026-02-02 14:57:54.437717627 +0000 UTC m=+1476.082354417" watchObservedRunningTime="2026-02-02 14:57:54.447097959 +0000 UTC m=+1476.091734729" Feb 02 14:57:57 crc kubenswrapper[4869]: I0202 14:57:57.245210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 14:58:15 crc kubenswrapper[4869]: I0202 14:58:15.304790 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:58:15 crc kubenswrapper[4869]: I0202 14:58:15.305579 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:58:27 crc kubenswrapper[4869]: I0202 14:58:27.133390 4869 scope.go:117] "RemoveContainer" containerID="c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e" Feb 02 14:58:27 crc kubenswrapper[4869]: I0202 14:58:27.189472 4869 scope.go:117] "RemoveContainer" containerID="7ceee7ca0afb25fecb47c7d1ea7c643849b3e2a4371bef94fa2e91ed301777b9" Feb 02 14:58:45 crc kubenswrapper[4869]: I0202 14:58:45.304167 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:58:45 crc kubenswrapper[4869]: I0202 14:58:45.305067 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.303968 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.304835 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.304982 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.306034 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.306113 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" gracePeriod=600 Feb 02 14:59:15 crc kubenswrapper[4869]: E0202 14:59:15.440875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.674430 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" exitCode=0 Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.674513 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"} Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.674614 4869 scope.go:117] "RemoveContainer" containerID="c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.675810 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 14:59:15 crc kubenswrapper[4869]: E0202 14:59:15.676530 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.294941 4869 scope.go:117] "RemoveContainer" containerID="078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.353431 4869 scope.go:117] "RemoveContainer" containerID="3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.383509 4869 scope.go:117] "RemoveContainer" containerID="40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.404519 4869 scope.go:117] "RemoveContainer" containerID="32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.422874 4869 scope.go:117] "RemoveContainer" containerID="5b057f5c2556a8f58e337485429c58bd6088b4c173270d5455938195918cef0b" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.445374 4869 scope.go:117] "RemoveContainer" containerID="905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121" Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.489093 4869 scope.go:117] "RemoveContainer" containerID="a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974" Feb 02 14:59:30 crc kubenswrapper[4869]: I0202 14:59:30.462705 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 14:59:30 crc kubenswrapper[4869]: E0202 14:59:30.463434 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.076372 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.079689 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.102168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.230733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.230813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.231169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.333445 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.333524 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.333619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.334226 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.334264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.359898 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.440604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.013094 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.874778 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935" exitCode=0 Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.874853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935"} Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.875220 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerStarted","Data":"23d37e4273dff81b5dc1819ee91f3581a057e50a765066767ea6b2472724f6e3"} Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.877247 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:59:36 crc kubenswrapper[4869]: I0202 14:59:36.885136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerStarted","Data":"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"} Feb 02 14:59:37 crc kubenswrapper[4869]: I0202 14:59:37.899852 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4" exitCode=0 Feb 02 14:59:37 crc kubenswrapper[4869]: I0202 14:59:37.899948 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"} Feb 02 14:59:38 crc kubenswrapper[4869]: I0202 14:59:38.915094 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerStarted","Data":"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0"} Feb 02 14:59:38 crc kubenswrapper[4869]: I0202 14:59:38.940497 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jhqvw" podStartSLOduration=2.445735593 podStartE2EDuration="4.940461124s" podCreationTimestamp="2026-02-02 14:59:34 +0000 UTC" firstStartedPulling="2026-02-02 14:59:35.877013476 +0000 UTC m=+1577.521650246" lastFinishedPulling="2026-02-02 14:59:38.371739007 +0000 UTC m=+1580.016375777" observedRunningTime="2026-02-02 14:59:38.933757328 +0000 UTC m=+1580.578394098" watchObservedRunningTime="2026-02-02 
14:59:38.940461124 +0000 UTC m=+1580.585097914" Feb 02 14:59:42 crc kubenswrapper[4869]: I0202 14:59:42.462737 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 14:59:42 crc kubenswrapper[4869]: E0202 14:59:42.463374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 14:59:44 crc kubenswrapper[4869]: I0202 14:59:44.441096 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:44 crc kubenswrapper[4869]: I0202 14:59:44.441642 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:44 crc kubenswrapper[4869]: I0202 14:59:44.496942 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:45 crc kubenswrapper[4869]: I0202 14:59:45.033489 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:45 crc kubenswrapper[4869]: I0202 14:59:45.087539 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.005352 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jhqvw" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" containerID="cri-o://9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" gracePeriod=2 Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.499833 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.694933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.695036 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.695112 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.696334 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities" (OuterVolumeSpecName: "utilities") pod "8d198208-3d2f-4b1f-986f-0cafce4c5ed5" (UID: "8d198208-3d2f-4b1f-986f-0cafce4c5ed5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.703489 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d" (OuterVolumeSpecName: "kube-api-access-qvw8d") pod "8d198208-3d2f-4b1f-986f-0cafce4c5ed5" (UID: "8d198208-3d2f-4b1f-986f-0cafce4c5ed5"). InnerVolumeSpecName "kube-api-access-qvw8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.745475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d198208-3d2f-4b1f-986f-0cafce4c5ed5" (UID: "8d198208-3d2f-4b1f-986f-0cafce4c5ed5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.797213 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.797250 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") on node \"crc\" DevicePath \"\"" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.797263 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017013 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" exitCode=0 Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017059 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0"} Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"23d37e4273dff81b5dc1819ee91f3581a057e50a765066767ea6b2472724f6e3"} Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017135 4869 scope.go:117] "RemoveContainer" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.046212 4869 scope.go:117] "RemoveContainer" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.058724 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.077035 4869 scope.go:117] "RemoveContainer" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.083432 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.121827 4869 scope.go:117] "RemoveContainer" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" Feb 02 14:59:48 crc kubenswrapper[4869]: E0202 14:59:48.122205 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0\": container with ID starting with 9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0 not found: ID does not exist" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122239 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0"} err="failed to get container status \"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0\": rpc error: code = NotFound desc = could not find container \"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0\": container with ID starting with 9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0 not found: ID does not exist" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122263 4869 scope.go:117] "RemoveContainer" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4" Feb 02 14:59:48 crc kubenswrapper[4869]: E0202 14:59:48.122854 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4\": container with ID starting with 7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4 not found: ID does not exist" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122877 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"} err="failed to get container status \"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4\": rpc error: code = NotFound desc = could not find container \"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4\": container with ID starting with 7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4 not found: ID does not exist" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122890 4869 scope.go:117] "RemoveContainer" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935" Feb 02 14:59:48 crc kubenswrapper[4869]: E0202 14:59:48.123435 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935\": container with ID starting with e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935 not found: ID does not exist" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.123479 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935"} err="failed to get container status \"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935\": rpc error: code = NotFound desc = could not find container \"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935\": container with ID starting with e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935 not found: ID does not exist" Feb 02 14:59:49 crc kubenswrapper[4869]: I0202 14:59:49.480012 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" path="/var/lib/kubelet/pods/8d198208-3d2f-4b1f-986f-0cafce4c5ed5/volumes" Feb 02 14:59:57 crc kubenswrapper[4869]: I0202 14:59:57.463887 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 14:59:57 crc kubenswrapper[4869]: E0202 14:59:57.465321 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.164822 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:00:00 crc kubenswrapper[4869]: E0202 15:00:00.167105 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-utilities" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167135 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-utilities" Feb 02 15:00:00 crc kubenswrapper[4869]: E0202 15:00:00.167148 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167154 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" Feb 02 15:00:00 crc kubenswrapper[4869]: E0202 15:00:00.167163 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-content" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167169 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-content" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167421 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.168226 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.170239 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.170681 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.183953 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.303180 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.303375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.303411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.405037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.405632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.405672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.406358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod 
\"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.413771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.425244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.496815 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.975838 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:00:01 crc kubenswrapper[4869]: I0202 15:00:01.146236 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" event={"ID":"2f7b8e70-b003-44d3-92f8-f3537d98f42f","Type":"ContainerStarted","Data":"3f8d9f91a4b30050fb71c3442bc23915a29c349a2821f57dd5239985970d263f"} Feb 02 15:00:02 crc kubenswrapper[4869]: I0202 15:00:02.168797 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerID="59bc9e2bf2a33d0613a4b3662bade576d4b886a4ed9586484e6fdba35d1e7e34" exitCode=0 Feb 02 15:00:02 crc kubenswrapper[4869]: I0202 15:00:02.169024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" event={"ID":"2f7b8e70-b003-44d3-92f8-f3537d98f42f","Type":"ContainerDied","Data":"59bc9e2bf2a33d0613a4b3662bade576d4b886a4ed9586484e6fdba35d1e7e34"} Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.586078 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.782192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.783011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.784099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.784571 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f7b8e70-b003-44d3-92f8-f3537d98f42f" (UID: "2f7b8e70-b003-44d3-92f8-f3537d98f42f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.785733 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.790707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f7b8e70-b003-44d3-92f8-f3537d98f42f" (UID: "2f7b8e70-b003-44d3-92f8-f3537d98f42f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.791218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2" (OuterVolumeSpecName: "kube-api-access-pz9r2") pod "2f7b8e70-b003-44d3-92f8-f3537d98f42f" (UID: "2f7b8e70-b003-44d3-92f8-f3537d98f42f"). InnerVolumeSpecName "kube-api-access-pz9r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.889415 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.889455 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:04 crc kubenswrapper[4869]: I0202 15:00:04.189138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" event={"ID":"2f7b8e70-b003-44d3-92f8-f3537d98f42f","Type":"ContainerDied","Data":"3f8d9f91a4b30050fb71c3442bc23915a29c349a2821f57dd5239985970d263f"} Feb 02 15:00:04 crc kubenswrapper[4869]: I0202 15:00:04.189205 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f8d9f91a4b30050fb71c3442bc23915a29c349a2821f57dd5239985970d263f" Feb 02 15:00:04 crc kubenswrapper[4869]: I0202 15:00:04.189203 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:08 crc kubenswrapper[4869]: I0202 15:00:08.463392 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:08 crc kubenswrapper[4869]: E0202 15:00:08.464414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:19 crc kubenswrapper[4869]: I0202 15:00:19.474834 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:19 crc kubenswrapper[4869]: E0202 15:00:19.477441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.575064 4869 scope.go:117] "RemoveContainer" containerID="5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.602617 4869 scope.go:117] "RemoveContainer" containerID="387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.627057 4869 scope.go:117] "RemoveContainer" containerID="2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.645832 4869 scope.go:117] "RemoveContainer" containerID="ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5" Feb 02 15:00:30 crc kubenswrapper[4869]: I0202 15:00:30.463570 4869 scope.go:117] "RemoveContainer" 
containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:30 crc kubenswrapper[4869]: E0202 15:00:30.464233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.135241 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:33 crc kubenswrapper[4869]: E0202 15:00:33.136515 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerName="collect-profiles" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.136548 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerName="collect-profiles" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.136933 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerName="collect-profiles" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.140370 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.152016 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.242980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.244404 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.244667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.347520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.348315 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.348892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.349172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.349318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.381481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.472671 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.997130 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:34 crc kubenswrapper[4869]: I0202 15:00:34.508315 4869 generic.go:334] "Generic (PLEG): container finished" podID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" exitCode=0 Feb 02 15:00:34 crc kubenswrapper[4869]: I0202 15:00:34.508368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37"} Feb 02 15:00:34 crc kubenswrapper[4869]: I0202 15:00:34.508399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerStarted","Data":"2759f5de4968e9862c72338bd1f481b3b6b44a2e19fea05d9f93d9a70f06d28a"} Feb 02 15:00:36 crc kubenswrapper[4869]: I0202 15:00:36.536835 4869 generic.go:334] "Generic (PLEG): container finished" podID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" exitCode=0 Feb 02 15:00:36 crc kubenswrapper[4869]: I0202 15:00:36.536958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb"} Feb 02 15:00:37 crc kubenswrapper[4869]: I0202 15:00:37.549364 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerStarted","Data":"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d"} Feb 02 15:00:37 crc kubenswrapper[4869]: I0202 15:00:37.572386 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w7584" podStartSLOduration=2.070460626 podStartE2EDuration="4.572362493s" podCreationTimestamp="2026-02-02 15:00:33 +0000 UTC" firstStartedPulling="2026-02-02 15:00:34.511927811 +0000 UTC m=+1636.156564581" lastFinishedPulling="2026-02-02 15:00:37.013829678 +0000 UTC m=+1638.658466448" observedRunningTime="2026-02-02 15:00:37.569342199 +0000 UTC m=+1639.213978969" watchObservedRunningTime="2026-02-02 15:00:37.572362493 +0000 UTC m=+1639.216999263" Feb 02 15:00:43 crc kubenswrapper[4869]: I0202 15:00:43.473651 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:43 crc kubenswrapper[4869]: I0202 15:00:43.474030 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:43 crc kubenswrapper[4869]: I0202 15:00:43.552959 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:44 crc kubenswrapper[4869]: I0202 15:00:44.158421 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:44 crc kubenswrapper[4869]: I0202 15:00:44.232487 4869 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:44 crc kubenswrapper[4869]: I0202 15:00:44.462452 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:44 crc kubenswrapper[4869]: E0202 15:00:44.462725 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.115016 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w7584" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" containerID="cri-o://c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" gracePeriod=2 Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.227234 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.230609 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.244131 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.244507 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.244628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.267708 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.347814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.347960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") 
pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.347993 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.348970 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.348982 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.378622 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.612804 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.622729 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.651865 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"844dd20e-3c4a-4900-91d4-5783dc09ffda\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.652408 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"844dd20e-3c4a-4900-91d4-5783dc09ffda\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.652461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"844dd20e-3c4a-4900-91d4-5783dc09ffda\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.653664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities" (OuterVolumeSpecName: "utilities") pod "844dd20e-3c4a-4900-91d4-5783dc09ffda" (UID: "844dd20e-3c4a-4900-91d4-5783dc09ffda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.659604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt" (OuterVolumeSpecName: "kube-api-access-hxgxt") pod "844dd20e-3c4a-4900-91d4-5783dc09ffda" (UID: "844dd20e-3c4a-4900-91d4-5783dc09ffda"). InnerVolumeSpecName "kube-api-access-hxgxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.709945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "844dd20e-3c4a-4900-91d4-5783dc09ffda" (UID: "844dd20e-3c4a-4900-91d4-5783dc09ffda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.753770 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.753815 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.753830 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.118269 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131343 4869 generic.go:334] "Generic (PLEG): container finished" podID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" exitCode=0 Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d"} Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131451 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131481 4869 scope.go:117] "RemoveContainer" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"2759f5de4968e9862c72338bd1f481b3b6b44a2e19fea05d9f93d9a70f06d28a"} Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.153693 4869 scope.go:117] "RemoveContainer" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.184958 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.192640 4869 scope.go:117] "RemoveContainer" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.209669 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.225310 4869 scope.go:117] "RemoveContainer" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" Feb 02 15:00:47 crc kubenswrapper[4869]: E0202 15:00:47.226472 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d\": container with ID starting with c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d not found: ID does not exist" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226516 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d"} err="failed to get container status \"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d\": rpc error: code = NotFound desc = could not find container \"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d\": container with ID starting with c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d not found: ID does not exist" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226541 4869 scope.go:117] "RemoveContainer" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" Feb 02 15:00:47 crc kubenswrapper[4869]: E0202 15:00:47.226893 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb\": container with ID starting with a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb not found: ID does not exist" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226951 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb"} err="failed to get container status \"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb\": rpc error: code = NotFound desc = could not find 
container \"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb\": container with ID starting with a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb not found: ID does not exist" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226974 4869 scope.go:117] "RemoveContainer" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" Feb 02 15:00:47 crc kubenswrapper[4869]: E0202 15:00:47.227342 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37\": container with ID starting with 30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37 not found: ID does not exist" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.227368 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37"} err="failed to get container status \"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37\": rpc error: code = NotFound desc = could not find container \"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37\": container with ID starting with 30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37 not found: ID does not exist" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.478112 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" path="/var/lib/kubelet/pods/844dd20e-3c4a-4900-91d4-5783dc09ffda/volumes" Feb 02 15:00:48 crc kubenswrapper[4869]: I0202 15:00:48.144626 4869 generic.go:334] "Generic (PLEG): container finished" podID="e52df171-dd1f-48e9-8dc7-06008925405b" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" exitCode=0 Feb 02 15:00:48 crc kubenswrapper[4869]: I0202 15:00:48.144685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a"} Feb 02 15:00:48 crc kubenswrapper[4869]: I0202 15:00:48.144715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerStarted","Data":"55d614c2f209f450d8b9684eaa80cfc66141e898c9070c7110dcb739f684745a"} Feb 02 15:00:49 crc kubenswrapper[4869]: I0202 15:00:49.157216 4869 generic.go:334] "Generic (PLEG): container finished" podID="e52df171-dd1f-48e9-8dc7-06008925405b" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" exitCode=0 Feb 02 15:00:49 crc kubenswrapper[4869]: I0202 15:00:49.157307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1"} Feb 02 15:00:50 crc kubenswrapper[4869]: I0202 15:00:50.169162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerStarted","Data":"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd"} Feb 02 15:00:50 crc kubenswrapper[4869]: I0202 15:00:50.191098 
Feb 02 15:00:50 crc kubenswrapper[4869]: I0202 15:00:50.191098 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n4jws" podStartSLOduration=2.443487557 podStartE2EDuration="4.191074453s" podCreationTimestamp="2026-02-02 15:00:46 +0000 UTC" firstStartedPulling="2026-02-02 15:00:48.148352879 +0000 UTC m=+1649.792989649" lastFinishedPulling="2026-02-02 15:00:49.895939775 +0000 UTC m=+1651.540576545" observedRunningTime="2026-02-02 15:00:50.187521685 +0000 UTC m=+1651.832158455" watchObservedRunningTime="2026-02-02 15:00:50.191074453 +0000 UTC m=+1651.835711233" Feb 02 15:00:56 crc kubenswrapper[4869]: I0202 15:00:56.613334 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:56 crc kubenswrapper[4869]: I0202 15:00:56.613663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:56 crc kubenswrapper[4869]: I0202 15:00:56.699321 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:57 crc kubenswrapper[4869]: I0202 15:00:57.311874 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:57 crc kubenswrapper[4869]: I0202 15:00:57.388207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.273748 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n4jws" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" containerID="cri-o://f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" gracePeriod=2 Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.471200 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:59 crc kubenswrapper[4869]: E0202 15:00:59.472113 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.723556 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.840925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"e52df171-dd1f-48e9-8dc7-06008925405b\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.841247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"e52df171-dd1f-48e9-8dc7-06008925405b\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.841289 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") pod \"e52df171-dd1f-48e9-8dc7-06008925405b\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.842192 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities" (OuterVolumeSpecName: "utilities") pod "e52df171-dd1f-48e9-8dc7-06008925405b" (UID: "e52df171-dd1f-48e9-8dc7-06008925405b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.854394 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5" (OuterVolumeSpecName: "kube-api-access-p4pz5") pod "e52df171-dd1f-48e9-8dc7-06008925405b" (UID: "e52df171-dd1f-48e9-8dc7-06008925405b"). InnerVolumeSpecName "kube-api-access-p4pz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.877059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e52df171-dd1f-48e9-8dc7-06008925405b" (UID: "e52df171-dd1f-48e9-8dc7-06008925405b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.944483 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.944538 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.944556 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.163491 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500741-9h6gs"] Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164084 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164114 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164130 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164197 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164212 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164221 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164240 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164277 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164286 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164300 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164308 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164557 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164593 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.165468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.182123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500741-9h6gs"] Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.250750 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.250821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.250982 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.251057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.286999 4869 generic.go:334] "Generic (PLEG): container finished" podID="e52df171-dd1f-48e9-8dc7-06008925405b" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" exitCode=0 Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd"} Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"55d614c2f209f450d8b9684eaa80cfc66141e898c9070c7110dcb739f684745a"} Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287123 4869 scope.go:117] "RemoveContainer" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287320 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.327358 4869 scope.go:117] "RemoveContainer" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.328069 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.341233 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.351021 4869 scope.go:117] "RemoveContainer" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.352766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.352941 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.353034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.353086 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.357820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.358128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.358316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.372131 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.402783 4869 scope.go:117] "RemoveContainer" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.403419 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd\": container with ID starting with f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd not found: ID does not exist" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403474 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd"} err="failed to get container status \"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd\": rpc error: code = NotFound desc = could not find container \"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd\": container with ID starting with f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd not found: ID does not exist" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403505 4869 scope.go:117] "RemoveContainer" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.403935 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1\": container with ID starting with df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1 not found: ID does not exist" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403970 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1"} err="failed to get container status \"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1\": rpc error: code = NotFound desc = could not find container \"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1\": container with ID starting with df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1 not found: ID does not exist" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403993 4869 scope.go:117] "RemoveContainer" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.404429 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a\": container with ID starting with 27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a not found: ID does not exist" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.404473 4869 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a"} err="failed to get container status \"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a\": rpc error: code = NotFound desc = could not find container \"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a\": container with ID starting with 27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a not found: ID does not exist" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.488346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:01 crc kubenswrapper[4869]: I0202 15:01:01.472865 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" path="/var/lib/kubelet/pods/e52df171-dd1f-48e9-8dc7-06008925405b/volumes" Feb 02 15:01:01 crc kubenswrapper[4869]: I0202 15:01:01.579867 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500741-9h6gs"] Feb 02 15:01:02 crc kubenswrapper[4869]: I0202 15:01:02.312783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerStarted","Data":"94f7fa1eef8aa02c6c9da7b1e358bd9e6450b0e6b3255bb4c36f552b88386ebc"} Feb 02 15:01:02 crc kubenswrapper[4869]: I0202 15:01:02.313165 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerStarted","Data":"a84490eaf8fef5ba7482c489b20bc4e41988271328ca98b054c70e9288d7abae"} Feb 02 15:01:02 crc kubenswrapper[4869]: I0202 15:01:02.339702 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500741-9h6gs" podStartSLOduration=2.3396799489999998 podStartE2EDuration="2.339679949s" podCreationTimestamp="2026-02-02 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:01:02.329924299 +0000 UTC m=+1663.974561079" watchObservedRunningTime="2026-02-02 15:01:02.339679949 +0000 UTC m=+1663.984316719" Feb 02 15:01:04 crc kubenswrapper[4869]: I0202 15:01:04.336062 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerID="94f7fa1eef8aa02c6c9da7b1e358bd9e6450b0e6b3255bb4c36f552b88386ebc" exitCode=0 Feb 02 15:01:04 crc kubenswrapper[4869]: I0202 15:01:04.336156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerDied","Data":"94f7fa1eef8aa02c6c9da7b1e358bd9e6450b0e6b3255bb4c36f552b88386ebc"} Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.696850 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.797617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.798294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.798350 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.798442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.807716 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.808325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8" (OuterVolumeSpecName: "kube-api-access-wb5m8") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "kube-api-access-wb5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.827423 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.861819 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data" (OuterVolumeSpecName: "config-data") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901465 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901545 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901563 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901576 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:06 crc kubenswrapper[4869]: I0202 15:01:06.359152 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerDied","Data":"a84490eaf8fef5ba7482c489b20bc4e41988271328ca98b054c70e9288d7abae"} Feb 02 15:01:06 crc kubenswrapper[4869]: I0202 15:01:06.359200 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84490eaf8fef5ba7482c489b20bc4e41988271328ca98b054c70e9288d7abae" Feb 02 15:01:06 crc kubenswrapper[4869]: I0202 15:01:06.359259 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:13 crc kubenswrapper[4869]: I0202 15:01:13.463896 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:13 crc kubenswrapper[4869]: E0202 15:01:13.464510 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:25 crc kubenswrapper[4869]: I0202 15:01:25.463691 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:25 crc kubenswrapper[4869]: E0202 15:01:25.465042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:31 crc kubenswrapper[4869]: I0202 15:01:31.084596 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hqz6l"] Feb 02 15:01:31 crc kubenswrapper[4869]: I0202 15:01:31.097750 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hqz6l"] Feb 02 15:01:31 crc kubenswrapper[4869]: 
I0202 15:01:31.477056 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" path="/var/lib/kubelet/pods/2cae9d7b-b1d0-4745-801d-14b5f1e5f959/volumes" Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.044295 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6nfjx"] Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.059547 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"] Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.070010 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6nfjx"] Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.080850 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"] Feb 02 15:01:33 crc kubenswrapper[4869]: I0202 15:01:33.482401 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" path="/var/lib/kubelet/pods/57ed4541-0cbb-4412-b054-fe72923fc2ba/volumes" Feb 02 15:01:33 crc kubenswrapper[4869]: I0202 15:01:33.483829 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" path="/var/lib/kubelet/pods/fc85b87e-a9f7-4407-8f88-59b46f424fe5/volumes" Feb 02 15:01:36 crc kubenswrapper[4869]: I0202 15:01:36.462994 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:36 crc kubenswrapper[4869]: E0202 15:01:36.463672 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:38 crc kubenswrapper[4869]: I0202 15:01:38.047542 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"] Feb 02 15:01:38 crc kubenswrapper[4869]: I0202 15:01:38.060073 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"] Feb 02 15:01:39 crc kubenswrapper[4869]: I0202 15:01:39.480159 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" path="/var/lib/kubelet/pods/667b6a5a-a090-407f-a4c1-229be7db4fbc/volumes" Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.036220 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.048275 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.058070 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.067295 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.479877 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" 
path="/var/lib/kubelet/pods/663a2e70-1d18-41b3-bc31-7e8b44f00450/volumes" Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.480875 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695a8791-53fd-414d-af01-753483223d32" path="/var/lib/kubelet/pods/695a8791-53fd-414d-af01-753483223d32/volumes" Feb 02 15:01:43 crc kubenswrapper[4869]: I0202 15:01:43.752775 4869 generic.go:334] "Generic (PLEG): container finished" podID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerID="7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112" exitCode=0 Feb 02 15:01:43 crc kubenswrapper[4869]: I0202 15:01:43.753102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerDied","Data":"7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112"} Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.189045 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.378035 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.378694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.379196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.380404 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.387836 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.391577 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv" (OuterVolumeSpecName: "kube-api-access-pcpxv") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "kube-api-access-pcpxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.414351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory" (OuterVolumeSpecName: "inventory") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.431534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484132 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484384 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484496 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484568 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.776419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerDied","Data":"45eb9092023474510986497b58938f8c056cf9410d12598b17849390008c5c0f"} Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.776471 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45eb9092023474510986497b58938f8c056cf9410d12598b17849390008c5c0f" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.776562 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.865786 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:01:45 crc kubenswrapper[4869]: E0202 15:01:45.866336 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866357 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: E0202 15:01:45.866384 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerName="keystone-cron" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866396 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerName="keystone-cron" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866576 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerName="keystone-cron" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866604 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.867272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.869991 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.870303 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.870447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.870755 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.890219 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.994655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.994705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: 
\"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.994837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.098108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.098454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.098487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.108602 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.121620 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.138581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.188107 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.726610 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.786593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerStarted","Data":"45cdf02dcf660f423cec4c8cf609c87cf1d944ff266f947e009a6246dcc81363"} Feb 02 15:01:47 crc kubenswrapper[4869]: I0202 15:01:47.801740 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerStarted","Data":"1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03"} Feb 02 15:01:47 crc kubenswrapper[4869]: I0202 15:01:47.821801 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" podStartSLOduration=2.284760652 podStartE2EDuration="2.821773939s" podCreationTimestamp="2026-02-02 15:01:45 +0000 UTC" firstStartedPulling="2026-02-02 15:01:46.7235349 +0000 UTC m=+1708.368171680" lastFinishedPulling="2026-02-02 15:01:47.260548207 +0000 UTC m=+1708.905184967" observedRunningTime="2026-02-02 15:01:47.815488273 +0000 UTC m=+1709.460125043" watchObservedRunningTime="2026-02-02 15:01:47.821773939 +0000 UTC m=+1709.466410709" Feb 02 15:01:50 crc kubenswrapper[4869]: I0202 15:01:50.032311 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 15:01:50 crc kubenswrapper[4869]: I0202 15:01:50.041073 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 15:01:50 crc kubenswrapper[4869]: I0202 15:01:50.462325 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:50 crc kubenswrapper[4869]: E0202 15:01:50.462580 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:51 crc kubenswrapper[4869]: I0202 15:01:51.475688 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" path="/var/lib/kubelet/pods/cedd0523-58d4-494f-9d04-76029ad9ca4d/volumes" Feb 02 15:02:05 crc kubenswrapper[4869]: I0202 15:02:05.462568 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:05 crc kubenswrapper[4869]: E0202 15:02:05.463468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:07 crc kubenswrapper[4869]: I0202 15:02:07.051631 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 15:02:07 crc kubenswrapper[4869]: I0202 15:02:07.077678 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 15:02:07 crc kubenswrapper[4869]: I0202 15:02:07.473675 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" path="/var/lib/kubelet/pods/8d01d875-1fd0-4d36-9077-337e2549b17c/volumes" Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.039270 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.052029 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.060498 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.068620 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.462889 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:20 crc kubenswrapper[4869]: E0202 15:02:20.464662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:21 crc kubenswrapper[4869]: I0202 15:02:21.474804 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" path="/var/lib/kubelet/pods/66e52e3f-cffb-44c2-9532-d645fa630d61/volumes" Feb 02 15:02:21 crc kubenswrapper[4869]: I0202 15:02:21.475449 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" path="/var/lib/kubelet/pods/8a91413a-aa7c-4564-bf72-53071981cd62/volumes" Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.045867 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.057634 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.065608 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.076469 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.085071 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.092109 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.103207 4869 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.110902 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.474082 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" path="/var/lib/kubelet/pods/6aa7f6b2-de14-408c-8960-662c2ab0e481/volumes" Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.475226 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" path="/var/lib/kubelet/pods/b5268e6d-82fe-45d8-a243-d37b326346a6/volumes" Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.476363 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be36a818-4a20-4330-ade7-225a479d7e98" path="/var/lib/kubelet/pods/be36a818-4a20-4330-ade7-225a479d7e98/volumes" Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.477742 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" path="/var/lib/kubelet/pods/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0/volumes" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.781868 4869 scope.go:117] "RemoveContainer" containerID="8ad30a46b6571b102d653acdd91c3117aa9caffad9f46651f8d10f3bce6d1da5" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.824896 4869 scope.go:117] "RemoveContainer" containerID="59d9f27d8d1ae8627d4c79fa51d4258f445b3484686b6e2d609c49071e26d3ff" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.879547 4869 scope.go:117] "RemoveContainer" containerID="fd9a1056bb847e46dd277ee512ce8a86dedc30d17b4d1ccaa855457de2552b81" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.927585 4869 scope.go:117] "RemoveContainer" containerID="d6f5aeb4cb8e140e0ec76f751f66f1f3334b226154def23e06d3735565e7a00e" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.965501 4869 scope.go:117] "RemoveContainer" containerID="bc23c4af30b56127451b57906851e79c3c56f83ff81cbe94961025e57448181c" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.020786 4869 scope.go:117] "RemoveContainer" containerID="6d8d94685f54694bdd3d654fd30340b20f11060d58afcb8b6db65cc019ab404b" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.053435 4869 scope.go:117] "RemoveContainer" containerID="213e1848995e356634b595c82a82047cb0a5c02652baad5bea2863f82f47bdbc" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.073756 4869 scope.go:117] "RemoveContainer" containerID="1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.091294 4869 scope.go:117] "RemoveContainer" containerID="df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.112087 4869 scope.go:117] "RemoveContainer" containerID="9b15642290472abfbc4ace64421c6af055e5988041270bd6769c924998672a78" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.136748 4869 scope.go:117] "RemoveContainer" containerID="78a897732627685686d46c9cdceda0daa9d9401b96294c575ac6408193fb1e9d" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.158374 4869 scope.go:117] "RemoveContainer" containerID="787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.188329 4869 scope.go:117] "RemoveContainer" 
containerID="a67405c792b46e1c7a87b10db412f756b77b32607171121e6cfbf4745d19567f" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.209503 4869 scope.go:117] "RemoveContainer" containerID="6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176" Feb 02 15:02:29 crc kubenswrapper[4869]: I0202 15:02:29.027829 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 15:02:29 crc kubenswrapper[4869]: I0202 15:02:29.036879 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 15:02:29 crc kubenswrapper[4869]: I0202 15:02:29.487280 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" path="/var/lib/kubelet/pods/2b3583d5-e064-4a64-89ba-a97a7fcc993d/volumes" Feb 02 15:02:31 crc kubenswrapper[4869]: I0202 15:02:31.462669 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:31 crc kubenswrapper[4869]: E0202 15:02:31.463305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:44 crc kubenswrapper[4869]: I0202 15:02:44.463259 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:44 crc kubenswrapper[4869]: E0202 15:02:44.464105 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:55 crc kubenswrapper[4869]: I0202 15:02:55.540291 4869 generic.go:334] "Generic (PLEG): container finished" podID="b13d039a-826a-4431-a147-9550c40460d2" containerID="1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03" exitCode=0 Feb 02 15:02:55 crc kubenswrapper[4869]: I0202 15:02:55.540373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerDied","Data":"1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03"} Feb 02 15:02:56 crc kubenswrapper[4869]: I0202 15:02:56.462965 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:56 crc kubenswrapper[4869]: E0202 15:02:56.463581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:56 crc kubenswrapper[4869]: I0202 15:02:56.942996 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.111374 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"b13d039a-826a-4431-a147-9550c40460d2\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.111453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"b13d039a-826a-4431-a147-9550c40460d2\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.111630 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"b13d039a-826a-4431-a147-9550c40460d2\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.118322 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4" (OuterVolumeSpecName: "kube-api-access-frkw4") pod "b13d039a-826a-4431-a147-9550c40460d2" (UID: "b13d039a-826a-4431-a147-9550c40460d2"). InnerVolumeSpecName "kube-api-access-frkw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.141788 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b13d039a-826a-4431-a147-9550c40460d2" (UID: "b13d039a-826a-4431-a147-9550c40460d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.155203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory" (OuterVolumeSpecName: "inventory") pod "b13d039a-826a-4431-a147-9550c40460d2" (UID: "b13d039a-826a-4431-a147-9550c40460d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.214229 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") on node \"crc\" DevicePath \"\"" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.214281 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.214295 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.561453 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerDied","Data":"45cdf02dcf660f423cec4c8cf609c87cf1d944ff266f947e009a6246dcc81363"} Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.561508 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cdf02dcf660f423cec4c8cf609c87cf1d944ff266f947e009a6246dcc81363" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.561584 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.645990 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:02:57 crc kubenswrapper[4869]: E0202 15:02:57.646640 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13d039a-826a-4431-a147-9550c40460d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.646668 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13d039a-826a-4431-a147-9550c40460d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.647005 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13d039a-826a-4431-a147-9550c40460d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.647979 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.651769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.652056 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.652789 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.652972 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.665296 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.727878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.728399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.728583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.831108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.831172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.831213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.835195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.835530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.851631 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.975915 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:58 crc kubenswrapper[4869]: I0202 15:02:58.526110 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:02:58 crc kubenswrapper[4869]: I0202 15:02:58.573621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerStarted","Data":"a92d5787b2d570b9ee527185f349f290dbbb140166f0cf740ed0e7247ebd4c92"} Feb 02 15:02:59 crc kubenswrapper[4869]: I0202 15:02:59.582614 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerStarted","Data":"e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0"} Feb 02 15:02:59 crc kubenswrapper[4869]: I0202 15:02:59.609592 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" podStartSLOduration=2.188686635 podStartE2EDuration="2.609568434s" podCreationTimestamp="2026-02-02 15:02:57 +0000 UTC" firstStartedPulling="2026-02-02 15:02:58.528013388 +0000 UTC m=+1780.172650158" lastFinishedPulling="2026-02-02 15:02:58.948895187 +0000 UTC m=+1780.593531957" observedRunningTime="2026-02-02 15:02:59.608766785 +0000 UTC m=+1781.253403555" watchObservedRunningTime="2026-02-02 15:02:59.609568434 +0000 UTC m=+1781.254205204" Feb 02 15:03:01 crc kubenswrapper[4869]: I0202 15:03:01.079978 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 
15:03:01 crc kubenswrapper[4869]: I0202 15:03:01.089591 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 15:03:01 crc kubenswrapper[4869]: I0202 15:03:01.481375 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367199b6-3340-454e-acc5-478f9b35b2df" path="/var/lib/kubelet/pods/367199b6-3340-454e-acc5-478f9b35b2df/volumes" Feb 02 15:03:04 crc kubenswrapper[4869]: I0202 15:03:04.670568 4869 generic.go:334] "Generic (PLEG): container finished" podID="a111a064-b5cf-4489-8262-1aef88170e07" containerID="e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0" exitCode=0 Feb 02 15:03:04 crc kubenswrapper[4869]: I0202 15:03:04.670661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerDied","Data":"e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0"} Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.168372 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.220499 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"a111a064-b5cf-4489-8262-1aef88170e07\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.220621 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"a111a064-b5cf-4489-8262-1aef88170e07\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.220819 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"a111a064-b5cf-4489-8262-1aef88170e07\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.235152 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64" (OuterVolumeSpecName: "kube-api-access-6vs64") pod "a111a064-b5cf-4489-8262-1aef88170e07" (UID: "a111a064-b5cf-4489-8262-1aef88170e07"). InnerVolumeSpecName "kube-api-access-6vs64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.252540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a111a064-b5cf-4489-8262-1aef88170e07" (UID: "a111a064-b5cf-4489-8262-1aef88170e07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.254998 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory" (OuterVolumeSpecName: "inventory") pod "a111a064-b5cf-4489-8262-1aef88170e07" (UID: "a111a064-b5cf-4489-8262-1aef88170e07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.323332 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.323387 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.323404 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.695957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerDied","Data":"a92d5787b2d570b9ee527185f349f290dbbb140166f0cf740ed0e7247ebd4c92"} Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.696037 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92d5787b2d570b9ee527185f349f290dbbb140166f0cf740ed0e7247ebd4c92" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.696146 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.815482 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:03:06 crc kubenswrapper[4869]: E0202 15:03:06.817217 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a111a064-b5cf-4489-8262-1aef88170e07" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.817248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a111a064-b5cf-4489-8262-1aef88170e07" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.817462 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a111a064-b5cf-4489-8262-1aef88170e07" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.818344 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.823749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.823991 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.824115 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.824538 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.841141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.951728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.951798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.952019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.054225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.054289 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.054326 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.055535 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.061933 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.062333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.080188 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.085231 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.093132 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.103264 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.116676 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.129459 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.140529 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.473338 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" path="/var/lib/kubelet/pods/2a5f9f47-1ba0-4d37-8597-874a62d9045e/volumes" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.474289 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" path="/var/lib/kubelet/pods/818ee387-cf73-45bc-8925-c234d5fd8ee3/volumes" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.474829 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" path="/var/lib/kubelet/pods/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b/volumes" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.680026 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:03:07 crc kubenswrapper[4869]: W0202 15:03:07.693939 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda82a77f6_7b23_4723_8ba7_a8754d3cc15f.slice/crio-387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe WatchSource:0}: Error finding container 387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe: Status 404 returned error can't find the container with id 387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.707965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerStarted","Data":"387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe"} Feb 02 15:03:08 crc kubenswrapper[4869]: I0202 15:03:08.729641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerStarted","Data":"6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00"} Feb 02 15:03:08 crc kubenswrapper[4869]: I0202 15:03:08.785425 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" podStartSLOduration=2.349037224 podStartE2EDuration="2.785397955s" podCreationTimestamp="2026-02-02 15:03:06 +0000 UTC" firstStartedPulling="2026-02-02 15:03:07.699130342 +0000 UTC m=+1789.343767112" lastFinishedPulling="2026-02-02 15:03:08.135491073 +0000 UTC m=+1789.780127843" observedRunningTime="2026-02-02 15:03:08.77662874 +0000 UTC m=+1790.421265530" watchObservedRunningTime="2026-02-02 15:03:08.785397955 +0000 UTC m=+1790.430034725" Feb 02 15:03:09 crc kubenswrapper[4869]: I0202 15:03:09.471272 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:09 crc kubenswrapper[4869]: E0202 15:03:09.471688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:23 crc kubenswrapper[4869]: I0202 15:03:23.462947 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:23 crc kubenswrapper[4869]: E0202 15:03:23.464344 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:26 crc kubenswrapper[4869]: I0202 15:03:26.035125 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 15:03:26 crc kubenswrapper[4869]: I0202 15:03:26.044833 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 15:03:27 crc kubenswrapper[4869]: I0202 15:03:27.478726 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" path="/var/lib/kubelet/pods/f0e63b99-6d06-44ea-a061-b9f79551126a/volumes" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.432676 4869 scope.go:117] "RemoveContainer" containerID="0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.479864 4869 scope.go:117] "RemoveContainer" containerID="cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.526341 4869 scope.go:117] "RemoveContainer" containerID="8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.577479 4869 scope.go:117] "RemoveContainer" containerID="da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.629721 4869 scope.go:117] "RemoveContainer" containerID="f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.701476 4869 scope.go:117] "RemoveContainer" containerID="8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2" Feb 02 15:03:38 crc kubenswrapper[4869]: I0202 15:03:38.462693 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:38 crc kubenswrapper[4869]: E0202 15:03:38.464101 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:43 crc kubenswrapper[4869]: I0202 15:03:43.040518 4869 generic.go:334] "Generic (PLEG): container finished" podID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerID="6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00" exitCode=0 Feb 02 15:03:43 crc kubenswrapper[4869]: I0202 15:03:43.040644 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" 
event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerDied","Data":"6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00"} Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.549203 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.716984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.717157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.717197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.727443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2" (OuterVolumeSpecName: "kube-api-access-5fqt2") pod "a82a77f6-7b23-4723-8ba7-a8754d3cc15f" (UID: "a82a77f6-7b23-4723-8ba7-a8754d3cc15f"). InnerVolumeSpecName "kube-api-access-5fqt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.750698 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a82a77f6-7b23-4723-8ba7-a8754d3cc15f" (UID: "a82a77f6-7b23-4723-8ba7-a8754d3cc15f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.758664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory" (OuterVolumeSpecName: "inventory") pod "a82a77f6-7b23-4723-8ba7-a8754d3cc15f" (UID: "a82a77f6-7b23-4723-8ba7-a8754d3cc15f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.819896 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.819974 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.819991 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.060818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerDied","Data":"387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe"} Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.060861 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.060926 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.153274 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:03:45 crc kubenswrapper[4869]: E0202 15:03:45.153674 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.153688 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.153875 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.154871 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.158366 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.158813 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.159174 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.169853 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.170532 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.237768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.237852 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.237896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.340235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.340316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.340354 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd79q\" (UniqueName: 
\"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.346089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.346128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.359301 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.471230 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:46 crc kubenswrapper[4869]: I0202 15:03:46.148472 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:03:47 crc kubenswrapper[4869]: I0202 15:03:47.084680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerStarted","Data":"96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9"} Feb 02 15:03:47 crc kubenswrapper[4869]: I0202 15:03:47.085295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerStarted","Data":"a6463f9b07a19640f75c366e973d6b134385522bb069d063749727ab03943faa"} Feb 02 15:03:47 crc kubenswrapper[4869]: I0202 15:03:47.117498 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" podStartSLOduration=1.700854248 podStartE2EDuration="2.117470053s" podCreationTimestamp="2026-02-02 15:03:45 +0000 UTC" firstStartedPulling="2026-02-02 15:03:46.152375 +0000 UTC m=+1827.797011770" lastFinishedPulling="2026-02-02 15:03:46.568990805 +0000 UTC m=+1828.213627575" observedRunningTime="2026-02-02 15:03:47.109016825 +0000 UTC m=+1828.753653635" watchObservedRunningTime="2026-02-02 15:03:47.117470053 +0000 UTC m=+1828.762106833" Feb 02 15:03:51 crc kubenswrapper[4869]: I0202 15:03:51.127120 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" 
containerID="96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9" exitCode=0 Feb 02 15:03:51 crc kubenswrapper[4869]: I0202 15:03:51.127240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerDied","Data":"96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9"} Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.627511 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.815175 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.815316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.815605 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.824174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q" (OuterVolumeSpecName: "kube-api-access-gd79q") pod "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" (UID: "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56"). InnerVolumeSpecName "kube-api-access-gd79q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.849983 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" (UID: "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.857088 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory" (OuterVolumeSpecName: "inventory") pod "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" (UID: "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.918700 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.918754 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.918775 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.150062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerDied","Data":"a6463f9b07a19640f75c366e973d6b134385522bb069d063749727ab03943faa"} Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.150120 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6463f9b07a19640f75c366e973d6b134385522bb069d063749727ab03943faa" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.150150 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.238815 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:03:53 crc kubenswrapper[4869]: E0202 15:03:53.239601 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.239766 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.240892 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.241881 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.248889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.249274 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.249842 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.250306 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.261675 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.430154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.430686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.431062 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.465055 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:53 crc kubenswrapper[4869]: E0202 15:03:53.465897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.532724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: 
I0202 15:03:53.533047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.533090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.538500 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.538672 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.563492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.862239 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:54 crc kubenswrapper[4869]: I0202 15:03:54.476464 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:03:54 crc kubenswrapper[4869]: W0202 15:03:54.480337 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ff5bea9_e74b_4810_b5b4_cc790c7c4289.slice/crio-8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215 WatchSource:0}: Error finding container 8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215: Status 404 returned error can't find the container with id 8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215 Feb 02 15:03:55 crc kubenswrapper[4869]: I0202 15:03:55.198501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerStarted","Data":"8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215"} Feb 02 15:03:56 crc kubenswrapper[4869]: I0202 15:03:56.209142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerStarted","Data":"522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc"} Feb 02 15:03:56 crc kubenswrapper[4869]: I0202 15:03:56.229167 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" podStartSLOduration=2.695998376 podStartE2EDuration="3.229135366s" podCreationTimestamp="2026-02-02 15:03:53 +0000 UTC" firstStartedPulling="2026-02-02 15:03:54.482974534 +0000 UTC m=+1836.127611304" lastFinishedPulling="2026-02-02 15:03:55.016111514 +0000 UTC m=+1836.660748294" observedRunningTime="2026-02-02 15:03:56.226504631 +0000 UTC m=+1837.871141411" watchObservedRunningTime="2026-02-02 15:03:56.229135366 +0000 UTC m=+1837.873772136" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.057432 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.072926 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.086127 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.098932 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.109660 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.121716 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.135849 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.148144 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.463900 4869 
scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:04:07 crc kubenswrapper[4869]: E0202 15:04:07.464426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.486668 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1748ab6-c795-414c-a52b-7bf549358524" path="/var/lib/kubelet/pods/b1748ab6-c795-414c-a52b-7bf549358524/volumes" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.487596 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" path="/var/lib/kubelet/pods/bdcf5e33-de9f-408f-8200-6f42fe0d0771/volumes" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.488462 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" path="/var/lib/kubelet/pods/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27/volumes" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.489432 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" path="/var/lib/kubelet/pods/dc7ca155-a072-4915-b5c5-e0b36a29af9b/volumes" Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.030264 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.041315 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.055775 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.065191 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 15:04:09 crc kubenswrapper[4869]: I0202 15:04:09.475858 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" path="/var/lib/kubelet/pods/0ff7e998-18b9-4fbe-906a-d756f7cf16c6/volumes" Feb 02 15:04:09 crc kubenswrapper[4869]: I0202 15:04:09.476932 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" path="/var/lib/kubelet/pods/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1/volumes" Feb 02 15:04:18 crc kubenswrapper[4869]: I0202 15:04:18.462714 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:04:19 crc kubenswrapper[4869]: I0202 15:04:19.499207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b"} Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.840823 4869 scope.go:117] "RemoveContainer" containerID="65c894d6caff283d8e12ca5ca2f52f63ea73a840cf785e78685f2636257f7088" Feb 02 15:04:28 crc kubenswrapper[4869]: 
I0202 15:04:28.869020 4869 scope.go:117] "RemoveContainer" containerID="99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb" Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.915564 4869 scope.go:117] "RemoveContainer" containerID="d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca" Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.959264 4869 scope.go:117] "RemoveContainer" containerID="48561ec38ba8e1d863e22aea7226f624c163b5e704dc9c40612b25be2fba3af4" Feb 02 15:04:29 crc kubenswrapper[4869]: I0202 15:04:29.003831 4869 scope.go:117] "RemoveContainer" containerID="7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb" Feb 02 15:04:29 crc kubenswrapper[4869]: I0202 15:04:29.045934 4869 scope.go:117] "RemoveContainer" containerID="94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46" Feb 02 15:04:42 crc kubenswrapper[4869]: I0202 15:04:42.715897 4869 generic.go:334] "Generic (PLEG): container finished" podID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerID="522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc" exitCode=0 Feb 02 15:04:42 crc kubenswrapper[4869]: I0202 15:04:42.716035 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerDied","Data":"522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc"} Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.248201 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.376456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.376517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.376567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.384347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6" (OuterVolumeSpecName: "kube-api-access-b4fl6") pod "5ff5bea9-e74b-4810-b5b4-cc790c7c4289" (UID: "5ff5bea9-e74b-4810-b5b4-cc790c7c4289"). InnerVolumeSpecName "kube-api-access-b4fl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.412451 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ff5bea9-e74b-4810-b5b4-cc790c7c4289" (UID: "5ff5bea9-e74b-4810-b5b4-cc790c7c4289"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.416566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory" (OuterVolumeSpecName: "inventory") pod "5ff5bea9-e74b-4810-b5b4-cc790c7c4289" (UID: "5ff5bea9-e74b-4810-b5b4-cc790c7c4289"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.478213 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.478683 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.478698 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") on node \"crc\" DevicePath \"\"" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.738405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerDied","Data":"8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215"} Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.738467 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.738472 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.858947 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:04:44 crc kubenswrapper[4869]: E0202 15:04:44.859493 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.859520 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.859775 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.860672 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.865861 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.868491 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.868826 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.868921 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.873062 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.989821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.990625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.990667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.093206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.093271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.093411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc 
kubenswrapper[4869]: I0202 15:04:45.100206 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.106585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.113975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.180578 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.802680 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.809754 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:04:46 crc kubenswrapper[4869]: I0202 15:04:46.761425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerStarted","Data":"64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30"} Feb 02 15:04:46 crc kubenswrapper[4869]: I0202 15:04:46.761499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerStarted","Data":"137ad0d914e992def7b05e4f71444f097804e5499b20b256a8d8bf4cc936b429"} Feb 02 15:04:53 crc kubenswrapper[4869]: I0202 15:04:53.831663 4869 generic.go:334] "Generic (PLEG): container finished" podID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerID="64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30" exitCode=0 Feb 02 15:04:53 crc kubenswrapper[4869]: I0202 15:04:53.831799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerDied","Data":"64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30"} Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.300834 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.381041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.381203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.381242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.390350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr" (OuterVolumeSpecName: "kube-api-access-gtslr") pod "caa3992c-a98c-46cf-a41b-772d9b3c92eb" (UID: "caa3992c-a98c-46cf-a41b-772d9b3c92eb"). InnerVolumeSpecName "kube-api-access-gtslr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.413807 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "caa3992c-a98c-46cf-a41b-772d9b3c92eb" (UID: "caa3992c-a98c-46cf-a41b-772d9b3c92eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.428877 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "caa3992c-a98c-46cf-a41b-772d9b3c92eb" (UID: "caa3992c-a98c-46cf-a41b-772d9b3c92eb"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.483436 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") on node \"crc\" DevicePath \"\"" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.483471 4869 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.483484 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.876145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerDied","Data":"137ad0d914e992def7b05e4f71444f097804e5499b20b256a8d8bf4cc936b429"} Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.876242 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="137ad0d914e992def7b05e4f71444f097804e5499b20b256a8d8bf4cc936b429" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.876253 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.953536 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:04:55 crc kubenswrapper[4869]: E0202 15:04:55.954030 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.954052 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.954233 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.954933 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.957809 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.960394 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.961415 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.964749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.991368 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.003546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.003647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.003768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.106271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.106404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.106458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.114423 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.114758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.141930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.295296 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.869264 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.887477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerStarted","Data":"c4d70035f88ebcd6c1428a838c4e4b58e0804e94158de6d2d295a9fdbd95c389"} Feb 02 15:04:57 crc kubenswrapper[4869]: I0202 15:04:57.903373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerStarted","Data":"38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d"} Feb 02 15:04:57 crc kubenswrapper[4869]: I0202 15:04:57.938530 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" podStartSLOduration=2.229065605 podStartE2EDuration="2.938510923s" podCreationTimestamp="2026-02-02 15:04:55 +0000 UTC" firstStartedPulling="2026-02-02 15:04:56.875497194 +0000 UTC m=+1898.520133974" lastFinishedPulling="2026-02-02 15:04:57.584942492 +0000 UTC m=+1899.229579292" observedRunningTime="2026-02-02 15:04:57.929894351 +0000 UTC m=+1899.574531121" watchObservedRunningTime="2026-02-02 15:04:57.938510923 +0000 UTC m=+1899.583147693" Feb 02 15:05:01 crc kubenswrapper[4869]: I0202 15:05:01.058939 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"] Feb 02 15:05:01 crc kubenswrapper[4869]: I0202 15:05:01.069082 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"] Feb 02 15:05:01 crc kubenswrapper[4869]: I0202 
15:05:01.486439 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" path="/var/lib/kubelet/pods/100a5963-124e-4354-8b5a-fadefef2a0a4/volumes" Feb 02 15:05:05 crc kubenswrapper[4869]: I0202 15:05:05.994443 4869 generic.go:334] "Generic (PLEG): container finished" podID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerID="38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d" exitCode=0 Feb 02 15:05:05 crc kubenswrapper[4869]: I0202 15:05:05.994470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerDied","Data":"38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d"} Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.497354 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.593209 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"fcac3e6a-7d05-4a46-a045-928dd040027d\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.593563 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"fcac3e6a-7d05-4a46-a045-928dd040027d\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.593606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"fcac3e6a-7d05-4a46-a045-928dd040027d\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.600945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2" (OuterVolumeSpecName: "kube-api-access-npjf2") pod "fcac3e6a-7d05-4a46-a045-928dd040027d" (UID: "fcac3e6a-7d05-4a46-a045-928dd040027d"). InnerVolumeSpecName "kube-api-access-npjf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.621376 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcac3e6a-7d05-4a46-a045-928dd040027d" (UID: "fcac3e6a-7d05-4a46-a045-928dd040027d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.635086 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory" (OuterVolumeSpecName: "inventory") pod "fcac3e6a-7d05-4a46-a045-928dd040027d" (UID: "fcac3e6a-7d05-4a46-a045-928dd040027d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.699831 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.699887 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") on node \"crc\" DevicePath \"\"" Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.699940 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.022154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerDied","Data":"c4d70035f88ebcd6c1428a838c4e4b58e0804e94158de6d2d295a9fdbd95c389"} Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.022204 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d70035f88ebcd6c1428a838c4e4b58e0804e94158de6d2d295a9fdbd95c389" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.022624 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.107758 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:05:08 crc kubenswrapper[4869]: E0202 15:05:08.108509 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.108533 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.108764 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.109580 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.111747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.112591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.112832 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.112869 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.124192 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.211176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.211315 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.211388 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.314648 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.314880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.315055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.320603 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.322708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.339826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.430284 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:09 crc kubenswrapper[4869]: I0202 15:05:09.059710 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:05:10 crc kubenswrapper[4869]: I0202 15:05:10.043514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerStarted","Data":"f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592"} Feb 02 15:05:10 crc kubenswrapper[4869]: I0202 15:05:10.043985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerStarted","Data":"6edcd83683966681890eb9a0b53a8877255f0641e4e312e6e45a47caa7c492a2"} Feb 02 15:05:10 crc kubenswrapper[4869]: I0202 15:05:10.067891 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" podStartSLOduration=1.639370377 podStartE2EDuration="2.067863587s" podCreationTimestamp="2026-02-02 15:05:08 +0000 UTC" firstStartedPulling="2026-02-02 15:05:09.056294763 +0000 UTC m=+1910.700931533" lastFinishedPulling="2026-02-02 15:05:09.484787953 +0000 UTC m=+1911.129424743" observedRunningTime="2026-02-02 15:05:10.064193848 +0000 UTC m=+1911.708830628" watchObservedRunningTime="2026-02-02 15:05:10.067863587 +0000 UTC m=+1911.712500357" Feb 02 15:05:19 crc kubenswrapper[4869]: I0202 15:05:19.135687 4869 generic.go:334] "Generic (PLEG): container finished" podID="a76d27b0-6cf8-4338-9022-1790d9544232" containerID="f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592" exitCode=0 Feb 02 15:05:19 crc kubenswrapper[4869]: I0202 15:05:19.135818 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerDied","Data":"f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592"} Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.567230 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.712957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"a76d27b0-6cf8-4338-9022-1790d9544232\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.713089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"a76d27b0-6cf8-4338-9022-1790d9544232\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.713234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"a76d27b0-6cf8-4338-9022-1790d9544232\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.725703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs" (OuterVolumeSpecName: "kube-api-access-578gs") pod "a76d27b0-6cf8-4338-9022-1790d9544232" (UID: "a76d27b0-6cf8-4338-9022-1790d9544232"). InnerVolumeSpecName "kube-api-access-578gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.747738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory" (OuterVolumeSpecName: "inventory") pod "a76d27b0-6cf8-4338-9022-1790d9544232" (UID: "a76d27b0-6cf8-4338-9022-1790d9544232"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.747943 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a76d27b0-6cf8-4338-9022-1790d9544232" (UID: "a76d27b0-6cf8-4338-9022-1790d9544232"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.815741 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") on node \"crc\" DevicePath \"\"" Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.815780 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.815793 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:05:21 crc kubenswrapper[4869]: I0202 15:05:21.173815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerDied","Data":"6edcd83683966681890eb9a0b53a8877255f0641e4e312e6e45a47caa7c492a2"} Feb 02 15:05:21 crc kubenswrapper[4869]: I0202 15:05:21.173873 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" Feb 02 15:05:21 crc kubenswrapper[4869]: I0202 15:05:21.173876 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6edcd83683966681890eb9a0b53a8877255f0641e4e312e6e45a47caa7c492a2" Feb 02 15:05:28 crc kubenswrapper[4869]: I0202 15:05:28.062759 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"] Feb 02 15:05:28 crc kubenswrapper[4869]: I0202 15:05:28.076307 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"] Feb 02 15:05:29 crc kubenswrapper[4869]: I0202 15:05:29.183747 4869 scope.go:117] "RemoveContainer" containerID="ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8" Feb 02 15:05:29 crc kubenswrapper[4869]: I0202 15:05:29.484161 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" path="/var/lib/kubelet/pods/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0/volumes" Feb 02 15:05:31 crc kubenswrapper[4869]: I0202 15:05:31.046507 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"] Feb 02 15:05:31 crc kubenswrapper[4869]: I0202 15:05:31.061835 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"] Feb 02 15:05:31 crc kubenswrapper[4869]: I0202 15:05:31.482686 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" path="/var/lib/kubelet/pods/6c4bee65-28e6-4f62-a2b5-b4d9227c5624/volumes" Feb 02 15:06:10 crc kubenswrapper[4869]: I0202 15:06:10.058050 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 15:06:10 crc kubenswrapper[4869]: I0202 15:06:10.066894 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 15:06:11 crc kubenswrapper[4869]: I0202 15:06:11.474856 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" 
path="/var/lib/kubelet/pods/3e3908c6-0f4b-4b27-8f07-9851e54d845b/volumes" Feb 02 15:06:29 crc kubenswrapper[4869]: I0202 15:06:29.273495 4869 scope.go:117] "RemoveContainer" containerID="b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f" Feb 02 15:06:29 crc kubenswrapper[4869]: I0202 15:06:29.328741 4869 scope.go:117] "RemoveContainer" containerID="b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69" Feb 02 15:06:29 crc kubenswrapper[4869]: I0202 15:06:29.400290 4869 scope.go:117] "RemoveContainer" containerID="38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17" Feb 02 15:06:45 crc kubenswrapper[4869]: I0202 15:06:45.304935 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:06:45 crc kubenswrapper[4869]: I0202 15:06:45.305774 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.245787 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:05 crc kubenswrapper[4869]: E0202 15:07:05.246873 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.246893 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.247173 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.248842 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.269835 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.370829 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.371232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.371397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.480789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.481078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.481336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.482784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.487814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.518348 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.570571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.070661 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.727120 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" exitCode=0 Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.727171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08"} Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.727204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerStarted","Data":"74bd486d0462a49148f30c443349f935cd80e03bad245301c3d04dff5daeb9fe"} Feb 02 15:07:07 crc kubenswrapper[4869]: I0202 15:07:07.761253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerStarted","Data":"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb"} Feb 02 15:07:08 crc kubenswrapper[4869]: I0202 15:07:08.773547 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" exitCode=0 Feb 02 15:07:08 crc kubenswrapper[4869]: I0202 15:07:08.773764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb"} Feb 02 15:07:09 crc kubenswrapper[4869]: I0202 15:07:09.797211 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerStarted","Data":"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528"} Feb 02 15:07:09 crc kubenswrapper[4869]: I0202 15:07:09.829872 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-25ggf" podStartSLOduration=2.185128238 podStartE2EDuration="4.829851338s" podCreationTimestamp="2026-02-02 15:07:05 +0000 UTC" firstStartedPulling="2026-02-02 15:07:06.729562754 +0000 UTC m=+2028.374199524" lastFinishedPulling="2026-02-02 15:07:09.374285844 +0000 UTC m=+2031.018922624" observedRunningTime="2026-02-02 15:07:09.825786328 +0000 UTC m=+2031.470423098" watchObservedRunningTime="2026-02-02 15:07:09.829851338 +0000 UTC m=+2031.474488108" Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.304742 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.305295 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.571357 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.571438 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.626236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.921177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.975674 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:17 crc kubenswrapper[4869]: I0202 15:07:17.874870 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-25ggf" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" containerID="cri-o://292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" gracePeriod=2 Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.399091 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.532724 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.533069 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.533179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.534438 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities" (OuterVolumeSpecName: "utilities") pod "cc4fe44e-d1b4-4a2a-91ae-37134223e21e" (UID: "cc4fe44e-d1b4-4a2a-91ae-37134223e21e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.543167 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2" (OuterVolumeSpecName: "kube-api-access-wkzz2") pod "cc4fe44e-d1b4-4a2a-91ae-37134223e21e" (UID: "cc4fe44e-d1b4-4a2a-91ae-37134223e21e"). InnerVolumeSpecName "kube-api-access-wkzz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.637736 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") on node \"crc\" DevicePath \"\"" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.637792 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.719974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc4fe44e-d1b4-4a2a-91ae-37134223e21e" (UID: "cc4fe44e-d1b4-4a2a-91ae-37134223e21e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.739947 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885117 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" exitCode=0 Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885169 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528"} Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885217 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"74bd486d0462a49148f30c443349f935cd80e03bad245301c3d04dff5daeb9fe"} Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885522 4869 scope.go:117] "RemoveContainer" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.921149 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.927788 4869 scope.go:117] "RemoveContainer" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.930396 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.972046 4869 scope.go:117] "RemoveContainer" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.005370 4869 scope.go:117] "RemoveContainer" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" Feb 02 15:07:19 crc kubenswrapper[4869]: E0202 15:07:19.006090 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528\": container with ID starting with 292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528 not found: ID does not exist" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.006163 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528"} err="failed to get container status \"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528\": rpc error: code = NotFound desc = could not find container \"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528\": container with ID starting with 292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528 not found: ID does not exist" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.006206 4869 scope.go:117] "RemoveContainer" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" Feb 02 15:07:19 crc kubenswrapper[4869]: E0202 15:07:19.006976 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb\": container with ID starting with d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb not found: ID does not exist" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.007016 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb"} err="failed to get container status \"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb\": rpc error: code = NotFound desc = could not find container 
\"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb\": container with ID starting with d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb not found: ID does not exist" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.007046 4869 scope.go:117] "RemoveContainer" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" Feb 02 15:07:19 crc kubenswrapper[4869]: E0202 15:07:19.007388 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08\": container with ID starting with b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08 not found: ID does not exist" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.007485 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08"} err="failed to get container status \"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08\": rpc error: code = NotFound desc = could not find container \"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08\": container with ID starting with b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08 not found: ID does not exist" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.484255 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" path="/var/lib/kubelet/pods/cc4fe44e-d1b4-4a2a-91ae-37134223e21e/volumes" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.303991 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.304760 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.304824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.305458 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.305524 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b" gracePeriod=600 Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.177975 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b" exitCode=0 Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.178135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b"} Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.178626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"} Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.178658 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:09:08 crc kubenswrapper[4869]: E0202 15:09:08.048249 4869 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.82:55034->38.129.56.82:44151: write tcp 38.129.56.82:55034->38.129.56.82:44151: write: broken pipe Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.433760 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.443712 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.453670 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.460662 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.477203 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.477259 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.480544 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.490808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.500382 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.511586 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.520817 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.532963 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:09:15 crc kubenswrapper[4869]: 
I0202 15:09:15.547045 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.554793 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.565369 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.573808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.580243 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.595063 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.608874 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.615578 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.481229 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3767bf04-261f-4a7b-9639-ae8002718621" path="/var/lib/kubelet/pods/3767bf04-261f-4a7b-9639-ae8002718621/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.482949 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" path="/var/lib/kubelet/pods/5ff5bea9-e74b-4810-b5b4-cc790c7c4289/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.484102 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" path="/var/lib/kubelet/pods/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.485218 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a111a064-b5cf-4489-8262-1aef88170e07" path="/var/lib/kubelet/pods/a111a064-b5cf-4489-8262-1aef88170e07/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.487131 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" path="/var/lib/kubelet/pods/a76d27b0-6cf8-4338-9022-1790d9544232/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.487788 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" path="/var/lib/kubelet/pods/a82a77f6-7b23-4723-8ba7-a8754d3cc15f/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.488512 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" path="/var/lib/kubelet/pods/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.489965 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13d039a-826a-4431-a147-9550c40460d2" path="/var/lib/kubelet/pods/b13d039a-826a-4431-a147-9550c40460d2/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.490678 4869 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" path="/var/lib/kubelet/pods/caa3992c-a98c-46cf-a41b-772d9b3c92eb/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.491393 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" path="/var/lib/kubelet/pods/fcac3e6a-7d05-4a46-a045-928dd040027d/volumes" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.123861 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d"] Feb 02 15:09:21 crc kubenswrapper[4869]: E0202 15:09:21.125001 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125017 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" Feb 02 15:09:21 crc kubenswrapper[4869]: E0202 15:09:21.125048 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-utilities" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125056 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-utilities" Feb 02 15:09:21 crc kubenswrapper[4869]: E0202 15:09:21.125095 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-content" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125102 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-content" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125295 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.126117 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.129733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.129980 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.130158 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.130287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.130392 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.148050 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d"] Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270157 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.372963 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373576 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.380951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.381129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.382373 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.383992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.395554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.476662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:22 crc kubenswrapper[4869]: I0202 15:09:22.047300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d"] Feb 02 15:09:22 crc kubenswrapper[4869]: W0202 15:09:22.051272 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ba8528_6790_4df1_92c8_828f0ccd858e.slice/crio-a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab WatchSource:0}: Error finding container a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab: Status 404 returned error can't find the container with id a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab Feb 02 15:09:22 crc kubenswrapper[4869]: I0202 15:09:22.208122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerStarted","Data":"a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab"} Feb 02 15:09:23 crc kubenswrapper[4869]: I0202 15:09:23.220413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerStarted","Data":"34097c075399f58cc0213991bed63c10db09ada52f0b5c23038e8fb7bcde2a18"} Feb 02 15:09:23 crc kubenswrapper[4869]: I0202 15:09:23.244191 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" podStartSLOduration=1.727247861 podStartE2EDuration="2.244168428s" podCreationTimestamp="2026-02-02 15:09:21 +0000 UTC" firstStartedPulling="2026-02-02 15:09:22.054052653 +0000 UTC m=+2163.698689453" lastFinishedPulling="2026-02-02 15:09:22.57097323 +0000 UTC m=+2164.215610020" observedRunningTime="2026-02-02 15:09:23.24177597 +0000 UTC m=+2164.886412750" watchObservedRunningTime="2026-02-02 15:09:23.244168428 +0000 UTC m=+2164.888805218" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.608952 4869 scope.go:117] "RemoveContainer" containerID="1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.679828 4869 scope.go:117] "RemoveContainer" containerID="6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.760115 4869 scope.go:117] "RemoveContainer" 
containerID="490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.798841 4869 scope.go:117] "RemoveContainer" containerID="7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.878871 4869 scope.go:117] "RemoveContainer" containerID="e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0" Feb 02 15:09:34 crc kubenswrapper[4869]: I0202 15:09:34.329715 4869 generic.go:334] "Generic (PLEG): container finished" podID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerID="34097c075399f58cc0213991bed63c10db09ada52f0b5c23038e8fb7bcde2a18" exitCode=0 Feb 02 15:09:34 crc kubenswrapper[4869]: I0202 15:09:34.329840 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerDied","Data":"34097c075399f58cc0213991bed63c10db09ada52f0b5c23038e8fb7bcde2a18"} Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.375273 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.383656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.396707 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.514087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.514456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.514543 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.616560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.616640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " 
pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.616671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.617250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.617834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.644026 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.714163 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.880542 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025577 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.032003 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.042448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm" (OuterVolumeSpecName: "kube-api-access-p8whm") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "kube-api-access-p8whm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.044668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph" (OuterVolumeSpecName: "ceph") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.060156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory" (OuterVolumeSpecName: "inventory") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.061663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128594 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128627 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128654 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128664 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128675 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.234267 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.351730 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.351727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerDied","Data":"a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab"} Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.352051 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.354678 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerStarted","Data":"63ed0aa4ae4d75f86ca5c11797083a1158d148802874c80387bd8d541d90c5d0"} Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.446069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2"] Feb 02 15:09:36 crc kubenswrapper[4869]: E0202 15:09:36.446807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.446827 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.447072 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.447948 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.461611 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2"] Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499104 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499537 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499432 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.541505 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.541547 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.542493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.542615 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.542679 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: E0202 15:09:36.596583 4869 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca940380_14c0_4d24_950b_7aa523735f62.slice/crio-bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ba8528_6790_4df1_92c8_828f0ccd858e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca940380_14c0_4d24_950b_7aa523735f62.slice/crio-conmon-bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336.scope\": RecentStats: unable to find data in memory cache]" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.645352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.645997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.646067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.646109 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.646148 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.653864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.653942 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.654244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.654454 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.669173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.839097 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:37 crc kubenswrapper[4869]: I0202 15:09:37.376054 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca940380-14c0-4d24-950b-7aa523735f62" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" exitCode=0 Feb 02 15:09:37 crc kubenswrapper[4869]: I0202 15:09:37.376593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336"} Feb 02 15:09:37 crc kubenswrapper[4869]: I0202 15:09:37.437409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2"] Feb 02 15:09:38 crc kubenswrapper[4869]: I0202 15:09:38.394856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerStarted","Data":"f0f59f64f18cd831b0ccbcfaeef9e58c704291972b6c59a787453f7131843bee"} Feb 02 15:09:38 crc kubenswrapper[4869]: I0202 15:09:38.395553 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerStarted","Data":"af2ea32d786cda13426e5b56227ed5b1f4953e3931b299286158fd837d86464e"} Feb 02 15:09:38 crc kubenswrapper[4869]: I0202 15:09:38.423267 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" podStartSLOduration=2.004016996 podStartE2EDuration="2.423162924s" podCreationTimestamp="2026-02-02 15:09:36 +0000 UTC" firstStartedPulling="2026-02-02 
15:09:37.450685117 +0000 UTC m=+2179.095321927" lastFinishedPulling="2026-02-02 15:09:37.869831075 +0000 UTC m=+2179.514467855" observedRunningTime="2026-02-02 15:09:38.41764557 +0000 UTC m=+2180.062282350" watchObservedRunningTime="2026-02-02 15:09:38.423162924 +0000 UTC m=+2180.067799714" Feb 02 15:09:39 crc kubenswrapper[4869]: I0202 15:09:39.407994 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca940380-14c0-4d24-950b-7aa523735f62" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" exitCode=0 Feb 02 15:09:39 crc kubenswrapper[4869]: I0202 15:09:39.408086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270"} Feb 02 15:09:40 crc kubenswrapper[4869]: I0202 15:09:40.419184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerStarted","Data":"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866"} Feb 02 15:09:40 crc kubenswrapper[4869]: I0202 15:09:40.451819 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w66l5" podStartSLOduration=2.728468569 podStartE2EDuration="5.451801102s" podCreationTimestamp="2026-02-02 15:09:35 +0000 UTC" firstStartedPulling="2026-02-02 15:09:37.383286144 +0000 UTC m=+2179.027922954" lastFinishedPulling="2026-02-02 15:09:40.106618727 +0000 UTC m=+2181.751255487" observedRunningTime="2026-02-02 15:09:40.446590974 +0000 UTC m=+2182.091227734" watchObservedRunningTime="2026-02-02 15:09:40.451801102 +0000 UTC m=+2182.096437872" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.307018 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.307964 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.716253 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.716343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.767873 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:46 crc kubenswrapper[4869]: I0202 15:09:46.544290 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:46 crc kubenswrapper[4869]: I0202 15:09:46.641045 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:48 crc kubenswrapper[4869]: 
I0202 15:09:48.500769 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w66l5" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" containerID="cri-o://d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" gracePeriod=2 Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.041584 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.148358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"ca940380-14c0-4d24-950b-7aa523735f62\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.148455 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"ca940380-14c0-4d24-950b-7aa523735f62\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.148723 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"ca940380-14c0-4d24-950b-7aa523735f62\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.149486 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities" (OuterVolumeSpecName: "utilities") pod "ca940380-14c0-4d24-950b-7aa523735f62" (UID: "ca940380-14c0-4d24-950b-7aa523735f62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.157380 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d" (OuterVolumeSpecName: "kube-api-access-8w66d") pod "ca940380-14c0-4d24-950b-7aa523735f62" (UID: "ca940380-14c0-4d24-950b-7aa523735f62"). InnerVolumeSpecName "kube-api-access-8w66d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.211416 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca940380-14c0-4d24-950b-7aa523735f62" (UID: "ca940380-14c0-4d24-950b-7aa523735f62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.251726 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.251762 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.251773 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518311 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca940380-14c0-4d24-950b-7aa523735f62" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" exitCode=0 Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866"} Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"63ed0aa4ae4d75f86ca5c11797083a1158d148802874c80387bd8d541d90c5d0"} Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518486 4869 scope.go:117] "RemoveContainer" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518783 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.558749 4869 scope.go:117] "RemoveContainer" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.567733 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.579491 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.591327 4869 scope.go:117] "RemoveContainer" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.637177 4869 scope.go:117] "RemoveContainer" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" Feb 02 15:09:49 crc kubenswrapper[4869]: E0202 15:09:49.637840 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866\": container with ID starting with d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866 not found: ID does not exist" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.637895 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866"} err="failed to get container status \"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866\": rpc error: code = NotFound desc = could not find container \"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866\": container with ID starting with d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866 not found: ID does not exist" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.637937 4869 scope.go:117] "RemoveContainer" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" Feb 02 15:09:49 crc kubenswrapper[4869]: E0202 15:09:49.638192 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270\": container with ID starting with f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270 not found: ID does not exist" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.638229 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270"} err="failed to get container status \"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270\": rpc error: code = NotFound desc = could not find container \"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270\": container with ID starting with f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270 not found: ID does not exist" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.638250 4869 scope.go:117] "RemoveContainer" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" Feb 02 15:09:49 crc kubenswrapper[4869]: E0202 15:09:49.638627 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336\": container with ID starting with bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336 not found: ID does not exist" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.638643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336"} err="failed to get container status \"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336\": rpc error: code = NotFound desc = could not find container \"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336\": container with ID starting with bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336 not found: ID does not exist" Feb 02 15:09:51 crc kubenswrapper[4869]: I0202 15:09:51.478700 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca940380-14c0-4d24-950b-7aa523735f62" path="/var/lib/kubelet/pods/ca940380-14c0-4d24-950b-7aa523735f62/volumes" Feb 02 15:10:15 crc kubenswrapper[4869]: I0202 15:10:15.304708 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:10:15 crc kubenswrapper[4869]: I0202 15:10:15.305527 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:10:30 crc kubenswrapper[4869]: I0202 15:10:30.054788 4869 scope.go:117] "RemoveContainer" containerID="522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc" Feb 02 15:10:30 crc kubenswrapper[4869]: I0202 15:10:30.112375 4869 scope.go:117] "RemoveContainer" containerID="96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.304958 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.305695 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.305766 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.306872 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"} 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.307061 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" gracePeriod=600 Feb 02 15:10:45 crc kubenswrapper[4869]: E0202 15:10:45.438515 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.116443 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" exitCode=0 Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.116707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"} Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.116803 4869 scope.go:117] "RemoveContainer" containerID="e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b" Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.118268 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:10:46 crc kubenswrapper[4869]: E0202 15:10:46.119098 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.755665 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:10:52 crc kubenswrapper[4869]: E0202 15:10:52.757055 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-utilities" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757080 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-utilities" Feb 02 15:10:52 crc kubenswrapper[4869]: E0202 15:10:52.757106 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757121 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" Feb 02 15:10:52 crc kubenswrapper[4869]: E0202 15:10:52.757154 4869 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-content" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757165 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-content" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757407 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.759359 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.785019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.862718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.862783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.862853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.965800 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.965886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.966029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.966834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.971372 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.988190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:53 crc kubenswrapper[4869]: I0202 15:10:53.094337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:53 crc kubenswrapper[4869]: I0202 15:10:53.666787 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.209996 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4" exitCode=0 Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.210064 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4"} Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.210101 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerStarted","Data":"ef25471803dfe9339a9d1b0293283644c98a8f02010d70dbd37f66e7576d60e8"} Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.214561 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:10:56 crc kubenswrapper[4869]: I0202 15:10:56.233623 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f" exitCode=0 Feb 02 15:10:56 crc kubenswrapper[4869]: I0202 15:10:56.233755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f"} Feb 02 15:10:57 crc kubenswrapper[4869]: I0202 15:10:57.251728 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerStarted","Data":"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd"} Feb 02 15:10:57 crc kubenswrapper[4869]: I0202 15:10:57.283660 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rfjq8" 
podStartSLOduration=2.850367072 podStartE2EDuration="5.283635002s" podCreationTimestamp="2026-02-02 15:10:52 +0000 UTC" firstStartedPulling="2026-02-02 15:10:54.21416428 +0000 UTC m=+2255.858801060" lastFinishedPulling="2026-02-02 15:10:56.6474322 +0000 UTC m=+2258.292068990" observedRunningTime="2026-02-02 15:10:57.27661397 +0000 UTC m=+2258.921250750" watchObservedRunningTime="2026-02-02 15:10:57.283635002 +0000 UTC m=+2258.928271802" Feb 02 15:10:57 crc kubenswrapper[4869]: I0202 15:10:57.463577 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:10:57 crc kubenswrapper[4869]: E0202 15:10:57.463953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.094797 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.095575 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.145615 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.364189 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.420689 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.327099 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rfjq8" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" containerID="cri-o://f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" gracePeriod=2 Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.857368 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.880546 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.880696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.880730 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.881436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities" (OuterVolumeSpecName: "utilities") pod "1ddeefe1-3e9c-4576-b226-e8c3b6462947" (UID: "1ddeefe1-3e9c-4576-b226-e8c3b6462947"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.892290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp" (OuterVolumeSpecName: "kube-api-access-jvhsp") pod "1ddeefe1-3e9c-4576-b226-e8c3b6462947" (UID: "1ddeefe1-3e9c-4576-b226-e8c3b6462947"). InnerVolumeSpecName "kube-api-access-jvhsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.957564 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ddeefe1-3e9c-4576-b226-e8c3b6462947" (UID: "1ddeefe1-3e9c-4576-b226-e8c3b6462947"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.982300 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.982351 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.982364 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.342674 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" exitCode=0 Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.342780 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.342796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd"} Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.343286 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"ef25471803dfe9339a9d1b0293283644c98a8f02010d70dbd37f66e7576d60e8"} Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.343317 4869 scope.go:117] "RemoveContainer" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.365018 4869 scope.go:117] "RemoveContainer" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.383003 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.389251 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.400585 4869 scope.go:117] "RemoveContainer" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.443537 4869 scope.go:117] "RemoveContainer" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" Feb 02 15:11:06 crc kubenswrapper[4869]: E0202 15:11:06.444044 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd\": container with ID starting with f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd not found: ID does not exist" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444171 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd"} err="failed to get container status \"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd\": rpc error: code = NotFound desc = could not find container \"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd\": container with ID starting with f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd not found: ID does not exist"
Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444206 4869 scope.go:117] "RemoveContainer" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f"
Feb 02 15:11:06 crc kubenswrapper[4869]: E0202 15:11:06.444680 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f\": container with ID starting with e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f not found: ID does not exist" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f"
Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444717 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f"} err="failed to get container status \"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f\": rpc error: code = NotFound desc = could not find container \"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f\": container with ID starting with e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f not found: ID does not exist"
Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444734 4869 scope.go:117] "RemoveContainer" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4"
Feb 02 15:11:06 crc kubenswrapper[4869]: E0202 15:11:06.445466 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4\": container with ID starting with 7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4 not found: ID does not exist" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4"
Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.445493 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4"} err="failed to get container status \"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4\": rpc error: code = NotFound desc = could not find container \"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4\": container with ID starting with 7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4 not found: ID does not exist"
Feb 02 15:11:07 crc kubenswrapper[4869]: I0202 15:11:07.474144 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" path="/var/lib/kubelet/pods/1ddeefe1-3e9c-4576-b226-e8c3b6462947/volumes"
Feb 02 15:11:09 crc kubenswrapper[4869]: I0202 15:11:09.463690 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"
Feb 02 15:11:09 crc kubenswrapper[4869]: E0202 15:11:09.464718 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
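The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above look alarming but describe a benign race: by the time the kubelet retries the deletion, CRI-O has already removed the container, so the status lookup comes back NotFound. Cleanup just has to be idempotent. A minimal sketch of that pattern; the Runtime interface and ErrNotFound below are illustrative stand-ins, not the real CRI client API:

package main

import (
    "errors"
    "fmt"
)

var ErrNotFound = errors.New("rpc error: code = NotFound")

// Runtime is a toy stand-in for a container runtime client.
type Runtime interface {
    ContainerStatus(id string) (string, error)
    RemoveContainer(id string) error
}

// removeIfPresent treats "already gone" as success, so cleanup can be
// retried (or raced) safely -- the situation the log lines above hit.
func removeIfPresent(rt Runtime, id string) error {
    if _, err := rt.ContainerStatus(id); errors.Is(err, ErrNotFound) {
        return nil // container already removed; nothing left to do
    } else if err != nil {
        return err
    }
    return rt.RemoveContainer(id)
}

type gone struct{} // a runtime in which every container is already deleted

func (gone) ContainerStatus(string) (string, error) { return "", ErrNotFound }
func (gone) RemoveContainer(string) error           { return nil }

func main() {
    fmt.Println(removeIfPresent(gone{}, "f98bab4c9551")) // prints <nil>
}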
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:14 crc kubenswrapper[4869]: I0202 15:11:14.426382 4869 generic.go:334] "Generic (PLEG): container finished" podID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerID="f0f59f64f18cd831b0ccbcfaeef9e58c704291972b6c59a787453f7131843bee" exitCode=0 Feb 02 15:11:14 crc kubenswrapper[4869]: I0202 15:11:14.426549 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerDied","Data":"f0f59f64f18cd831b0ccbcfaeef9e58c704291972b6c59a787453f7131843bee"} Feb 02 15:11:15 crc kubenswrapper[4869]: I0202 15:11:15.890555 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003255 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.009324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph" (OuterVolumeSpecName: "ceph") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.009397 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42" (OuterVolumeSpecName: "kube-api-access-pvv42") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "kube-api-access-pvv42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.009931 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.029163 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.032309 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory" (OuterVolumeSpecName: "inventory") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106772 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106820 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106835 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106848 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106861 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.450389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerDied","Data":"af2ea32d786cda13426e5b56227ed5b1f4953e3931b299286158fd837d86464e"} Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.450732 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af2ea32d786cda13426e5b56227ed5b1f4953e3931b299286158fd837d86464e" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.450818 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557230 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"] Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557741 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557771 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557819 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-content" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557830 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-content" Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557841 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557848 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557867 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-utilities" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557874 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-utilities" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.558127 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.558187 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.559380 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.563850 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.564631 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.564650 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.564741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.565764 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.572567 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"] Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.820850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.820947 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.820984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.821120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.826340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.827063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.831499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.844434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.883110 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"
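The reconciler_common entries above are the kubelet's volume manager converging actual state onto desired state: when the configure-network pod appears, each declared volume passes through VerifyControllerAttachedVolume, "MountVolume started", and "MountVolume.SetUp succeeded"; when a pod is deleted, the same machinery runs in reverse (UnmountVolume, TearDown, "Volume detached"). A toy sketch of that reconcile loop, with illustrative types rather than kubelet internals:

package main

import "fmt"

// reconcile mounts anything desired-but-absent and unmounts anything
// present-but-undesired, mirroring the mount/unmount pairs in the log.
func reconcile(desired, actual map[string]bool) {
    for vol := range desired {
        if !actual[vol] {
            fmt.Printf("MountVolume started for volume %q\n", vol)
            actual[vol] = true
        }
    }
    for vol := range actual {
        if !desired[vol] {
            fmt.Printf("UnmountVolume started for volume %q\n", vol)
            delete(actual, vol)
        }
    }
}

func main() {
    actual := map[string]bool{}
    desired := map[string]bool{
        "ceph": true, "inventory": true,
        "ssh-key-openstack-edpm-ipam": true, "kube-api-access-nfz78": true,
    }
    reconcile(desired, actual)           // pod scheduled: four mounts, as logged
    reconcile(map[string]bool{}, actual) // pod deleted: four unmounts
}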
Feb 02 15:11:17 crc kubenswrapper[4869]: I0202 15:11:17.441487 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"]
Feb 02 15:11:17 crc kubenswrapper[4869]: I0202 15:11:17.461280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerStarted","Data":"acf88936080f9b69bbfc59ba61fe21d0d09c169098d92792d7fc2b90aac78878"}
Feb 02 15:11:18 crc kubenswrapper[4869]: I0202 15:11:18.474398 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerStarted","Data":"cde84badc546ed3361ad6d70faccac9ff76362cd4f63c4e1c7c03f18d947a8d1"}
Feb 02 15:11:18 crc kubenswrapper[4869]: I0202 15:11:18.510177 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" podStartSLOduration=1.963541355 podStartE2EDuration="2.510145819s" podCreationTimestamp="2026-02-02 15:11:16 +0000 UTC" firstStartedPulling="2026-02-02 15:11:17.442873537 +0000 UTC m=+2279.087510307" lastFinishedPulling="2026-02-02 15:11:17.989478001 +0000 UTC m=+2279.634114771" observedRunningTime="2026-02-02 15:11:18.495310295 +0000 UTC m=+2280.139947065" watchObservedRunningTime="2026-02-02 15:11:18.510145819 +0000 UTC m=+2280.154782609"
Feb 02 15:11:21 crc kubenswrapper[4869]: I0202 15:11:21.464096 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"
Feb 02 15:11:21 crc kubenswrapper[4869]: E0202 15:11:21.465260 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:11:30 crc kubenswrapper[4869]: I0202 15:11:30.242694 4869 scope.go:117] "RemoveContainer" containerID="f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592"
Feb 02 15:11:30 crc kubenswrapper[4869]: I0202 15:11:30.295687 4869 scope.go:117] "RemoveContainer" containerID="38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d"
Feb 02 15:11:30 crc kubenswrapper[4869]: I0202 15:11:30.336690 4869 scope.go:117] "RemoveContainer" containerID="64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30"
Feb 02 15:11:33 crc kubenswrapper[4869]: I0202 15:11:33.463771 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"
Feb 02 15:11:33 crc kubenswrapper[4869]: E0202 15:11:33.464949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:11:39 crc
kubenswrapper[4869]: I0202 15:11:39.721576 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.724329 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.733278 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.890047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.890441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.890570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.993156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.993391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.993478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.994315 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.994429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.025887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.052602 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.585634 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.716530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerStarted","Data":"53ae6a61a15772e781d210ab96db6151129525f1ece11bcdfe4cb307a47ab13a"} Feb 02 15:11:41 crc kubenswrapper[4869]: I0202 15:11:41.727180 4869 generic.go:334] "Generic (PLEG): container finished" podID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" exitCode=0 Feb 02 15:11:41 crc kubenswrapper[4869]: I0202 15:11:41.727244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52"} Feb 02 15:11:42 crc kubenswrapper[4869]: I0202 15:11:42.741375 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerStarted","Data":"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5"} Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.754054 4869 generic.go:334] "Generic (PLEG): container finished" podID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerID="cde84badc546ed3361ad6d70faccac9ff76362cd4f63c4e1c7c03f18d947a8d1" exitCode=0 Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.754137 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerDied","Data":"cde84badc546ed3361ad6d70faccac9ff76362cd4f63c4e1c7c03f18d947a8d1"} Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.759213 4869 generic.go:334] "Generic (PLEG): container finished" podID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" exitCode=0 Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.759293 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5"} Feb 02 15:11:44 crc kubenswrapper[4869]: I0202 15:11:44.771241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" 
event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerStarted","Data":"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145"} Feb 02 15:11:44 crc kubenswrapper[4869]: I0202 15:11:44.813250 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tzvff" podStartSLOduration=3.013288265 podStartE2EDuration="5.813219986s" podCreationTimestamp="2026-02-02 15:11:39 +0000 UTC" firstStartedPulling="2026-02-02 15:11:41.729590678 +0000 UTC m=+2303.374227458" lastFinishedPulling="2026-02-02 15:11:44.529522409 +0000 UTC m=+2306.174159179" observedRunningTime="2026-02-02 15:11:44.801282143 +0000 UTC m=+2306.445918923" watchObservedRunningTime="2026-02-02 15:11:44.813219986 +0000 UTC m=+2306.457856756" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.262779 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.421887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.422056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.422322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.422414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.430216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78" (OuterVolumeSpecName: "kube-api-access-nfz78") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "kube-api-access-nfz78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.431688 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph" (OuterVolumeSpecName: "ceph") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.461248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory" (OuterVolumeSpecName: "inventory") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.470499 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525221 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525287 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525299 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525314 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.784528 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.784545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerDied","Data":"acf88936080f9b69bbfc59ba61fe21d0d09c169098d92792d7fc2b90aac78878"} Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.784648 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf88936080f9b69bbfc59ba61fe21d0d09c169098d92792d7fc2b90aac78878" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.881074 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr"] Feb 02 15:11:45 crc kubenswrapper[4869]: E0202 15:11:45.881490 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.881508 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.881683 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.882351 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.884584 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.884737 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.884933 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.885056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.887069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.898803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr"] Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.036126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.036210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.036398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.037029 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.145804 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.147213 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.148436 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.165242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.202057 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: W0202 15:11:46.753217 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34077009_4156_4523_9f51_24147190e39c.slice/crio-a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92 WatchSource:0}: Error finding container a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92: Status 404 returned error can't find the container with id a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92 Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.755328 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr"] Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.796180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerStarted","Data":"a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92"} Feb 02 15:11:47 crc kubenswrapper[4869]: I0202 15:11:47.463506 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:11:47 crc kubenswrapper[4869]: E0202 15:11:47.464306 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:47 crc kubenswrapper[4869]: I0202 15:11:47.813396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerStarted","Data":"9526758d149497a69e282bca21d274216371b7965602b112ae44ab9d019d3b69"} Feb 02 
15:11:47 crc kubenswrapper[4869]: I0202 15:11:47.843710 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" podStartSLOduration=2.400332407 podStartE2EDuration="2.8436878s" podCreationTimestamp="2026-02-02 15:11:45 +0000 UTC" firstStartedPulling="2026-02-02 15:11:46.756652034 +0000 UTC m=+2308.401288814" lastFinishedPulling="2026-02-02 15:11:47.200007427 +0000 UTC m=+2308.844644207" observedRunningTime="2026-02-02 15:11:47.841843975 +0000 UTC m=+2309.486480785" watchObservedRunningTime="2026-02-02 15:11:47.8436878 +0000 UTC m=+2309.488324570"
Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.053710 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tzvff"
Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.054269 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tzvff"
Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.112254 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tzvff"
Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.927844 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tzvff"
Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.992221 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"]
Feb 02 15:11:52 crc kubenswrapper[4869]: I0202 15:11:52.877999 4869 generic.go:334] "Generic (PLEG): container finished" podID="34077009-4156-4523-9f51-24147190e39c" containerID="9526758d149497a69e282bca21d274216371b7965602b112ae44ab9d019d3b69" exitCode=0
Feb 02 15:11:52 crc kubenswrapper[4869]: I0202 15:11:52.878171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerDied","Data":"9526758d149497a69e282bca21d274216371b7965602b112ae44ab9d019d3b69"}
Feb 02 15:11:52 crc kubenswrapper[4869]: I0202 15:11:52.880371 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tzvff" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" containerID="cri-o://538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" gracePeriod=2
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.344569 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff"
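The pod_startup_latency_tracker entries in this log all satisfy one relation: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), i.e. the startup SLO excludes time spent pulling images. A quick arithmetic check against the validate-network entry above, using the monotonic m=+ offsets; the constants are copied from the log, the formula is the inferred relationship rather than kubelet source:

package main

import "fmt"

func main() {
    const (
        firstStartedPulling = 2308.401288814 // m=+ offset, seconds
        lastFinishedPulling = 2308.844644207 // m=+ offset, seconds
        podStartE2E         = 2.8436878      // podStartE2EDuration, seconds
    )
    slo := podStartE2E - (lastFinishedPulling - firstStartedPulling)
    fmt.Printf("podStartSLOduration = %.9f\n", slo) // 2.400332407, as logged
}

The same subtraction reproduces the logged SLO durations for the community-operators, configure-network, and redhat-marketplace pods exactly.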
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.539352 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"593827cf-cb4f-4ce4-9600-ed91af9aca43\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") "
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.539697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"593827cf-cb4f-4ce4-9600-ed91af9aca43\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") "
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.540422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"593827cf-cb4f-4ce4-9600-ed91af9aca43\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") "
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.541245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities" (OuterVolumeSpecName: "utilities") pod "593827cf-cb4f-4ce4-9600-ed91af9aca43" (UID: "593827cf-cb4f-4ce4-9600-ed91af9aca43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.541646 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.549884 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86" (OuterVolumeSpecName: "kube-api-access-rmx86") pod "593827cf-cb4f-4ce4-9600-ed91af9aca43" (UID: "593827cf-cb4f-4ce4-9600-ed91af9aca43"). InnerVolumeSpecName "kube-api-access-rmx86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.564503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "593827cf-cb4f-4ce4-9600-ed91af9aca43" (UID: "593827cf-cb4f-4ce4-9600-ed91af9aca43"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.644150 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.644199 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919020 4869 generic.go:334] "Generic (PLEG): container finished" podID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" exitCode=0 Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145"} Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"53ae6a61a15772e781d210ab96db6151129525f1ece11bcdfe4cb307a47ab13a"} Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919250 4869 scope.go:117] "RemoveContainer" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.920601 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.949549 4869 scope.go:117] "RemoveContainer" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.971101 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.982009 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.026168 4869 scope.go:117] "RemoveContainer" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.048322 4869 scope.go:117] "RemoveContainer" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" Feb 02 15:11:54 crc kubenswrapper[4869]: E0202 15:11:54.055368 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145\": container with ID starting with 538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145 not found: ID does not exist" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.055404 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145"} err="failed to get container status \"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145\": rpc error: code = NotFound desc = could not find container \"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145\": container with ID starting with 538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145 not found: ID does not exist" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.055430 4869 scope.go:117] "RemoveContainer" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" Feb 02 15:11:54 crc kubenswrapper[4869]: E0202 15:11:54.056770 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5\": container with ID starting with fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5 not found: ID does not exist" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.056793 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5"} err="failed to get container status \"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5\": rpc error: code = NotFound desc = could not find container \"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5\": container with ID starting with fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5 not found: ID does not exist" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.056807 4869 scope.go:117] "RemoveContainer" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" Feb 02 15:11:54 crc kubenswrapper[4869]: E0202 15:11:54.058198 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52\": container with ID starting with 4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52 not found: ID does not exist" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.058327 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52"} err="failed to get container status \"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52\": rpc error: code = NotFound desc = could not find container \"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52\": container with ID starting with 4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52 not found: ID does not exist" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.388929 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.561844 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.562009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.562126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.562221 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.568261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph" (OuterVolumeSpecName: "ceph") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.570067 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf" (OuterVolumeSpecName: "kube-api-access-mqfxf") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "kube-api-access-mqfxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.607475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.610531 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory" (OuterVolumeSpecName: "inventory") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667778 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667818 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667836 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667850 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.936939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerDied","Data":"a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92"} Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.936982 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.936997 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.017361 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc"] Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.017941 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-content" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.017962 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-content" Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.017984 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-utilities" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.017992 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-utilities" Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.018016 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018024 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.018038 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34077009-4156-4523-9f51-24147190e39c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018047 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="34077009-4156-4523-9f51-24147190e39c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018247 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018261 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="34077009-4156-4523-9f51-24147190e39c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.019113 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.021648 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.022208 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.022330 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.021721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.034504 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.056797 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc"] Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.178489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.179432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.179580 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.179702 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.281539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.281711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.281840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.282136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.287821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.288478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.288500 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.302347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.346969 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.482007 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" path="/var/lib/kubelet/pods/593827cf-cb4f-4ce4-9600-ed91af9aca43/volumes" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.926325 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc"] Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.950834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerStarted","Data":"384162117dc63ce3f5a7c9c83a29a570f7ffbffa8a5d5c4c94f7c36292e790fc"} Feb 02 15:11:56 crc kubenswrapper[4869]: I0202 15:11:56.964474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerStarted","Data":"8132da2ec517a8421d696587dbb443e080c1257379cee4569885d339f8cbd656"} Feb 02 15:11:57 crc kubenswrapper[4869]: I0202 15:11:57.000812 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" podStartSLOduration=2.570934524 podStartE2EDuration="3.000782345s" podCreationTimestamp="2026-02-02 15:11:54 +0000 UTC" firstStartedPulling="2026-02-02 15:11:55.932360504 +0000 UTC m=+2317.576997294" lastFinishedPulling="2026-02-02 15:11:56.362208315 +0000 UTC m=+2318.006845115" observedRunningTime="2026-02-02 15:11:56.993648329 +0000 UTC m=+2318.638285189" watchObservedRunningTime="2026-02-02 15:11:57.000782345 +0000 UTC m=+2318.645419145" Feb 02 15:12:02 crc kubenswrapper[4869]: I0202 15:12:02.463705 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:02 crc kubenswrapper[4869]: E0202 15:12:02.464904 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:16 crc kubenswrapper[4869]: I0202 15:12:16.462510 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:16 crc kubenswrapper[4869]: E0202 15:12:16.463397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:27 crc kubenswrapper[4869]: I0202 15:12:27.463672 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:27 crc kubenswrapper[4869]: E0202 15:12:27.464997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:31 crc kubenswrapper[4869]: I0202 15:12:31.332686 4869 generic.go:334] "Generic (PLEG): container finished" podID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerID="8132da2ec517a8421d696587dbb443e080c1257379cee4569885d339f8cbd656" exitCode=0 Feb 02 15:12:31 crc kubenswrapper[4869]: I0202 15:12:31.333012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerDied","Data":"8132da2ec517a8421d696587dbb443e080c1257379cee4569885d339f8cbd656"} Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.827175 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878352 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878412 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.886248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb" (OuterVolumeSpecName: "kube-api-access-9nsmb") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "kube-api-access-9nsmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.886688 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph" (OuterVolumeSpecName: "ceph") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.916631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.927764 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory" (OuterVolumeSpecName: "inventory") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981580 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981630 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981651 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981673 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.356444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerDied","Data":"384162117dc63ce3f5a7c9c83a29a570f7ffbffa8a5d5c4c94f7c36292e790fc"} Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.356508 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="384162117dc63ce3f5a7c9c83a29a570f7ffbffa8a5d5c4c94f7c36292e790fc" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.356866 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.454265 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh"] Feb 02 15:12:33 crc kubenswrapper[4869]: E0202 15:12:33.454662 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.454681 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.454859 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.455464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.457733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.458635 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.458963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.460520 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.461565 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.482596 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh"] Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.497142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.497295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.497466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.498681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.600655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.600797 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.601014 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.601110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.606828 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.607339 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.609206 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.621612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.779202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:34 crc kubenswrapper[4869]: I0202 15:12:34.339734 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh"] Feb 02 15:12:34 crc kubenswrapper[4869]: I0202 15:12:34.367298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerStarted","Data":"e70061b2b29f5065618bdcae2caaf357d73c1f036f2f96b3530b6e8204f68716"} Feb 02 15:12:35 crc kubenswrapper[4869]: I0202 15:12:35.378350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerStarted","Data":"0a03f366dd3f2f3e065cb5cc8356200cdb3cd9ea6e0dfdc460968a29d9e33f18"} Feb 02 15:12:35 crc kubenswrapper[4869]: I0202 15:12:35.397190 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" podStartSLOduration=1.8771394940000001 podStartE2EDuration="2.397172097s" podCreationTimestamp="2026-02-02 15:12:33 +0000 UTC" firstStartedPulling="2026-02-02 15:12:34.349390812 +0000 UTC m=+2355.994027622" lastFinishedPulling="2026-02-02 15:12:34.869423405 +0000 UTC m=+2356.514060225" observedRunningTime="2026-02-02 15:12:35.395985118 +0000 UTC m=+2357.040621888" watchObservedRunningTime="2026-02-02 15:12:35.397172097 +0000 UTC m=+2357.041808867" Feb 02 15:12:39 crc kubenswrapper[4869]: I0202 15:12:39.416018 4869 generic.go:334] "Generic (PLEG): container finished" podID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerID="0a03f366dd3f2f3e065cb5cc8356200cdb3cd9ea6e0dfdc460968a29d9e33f18" exitCode=0 Feb 02 15:12:39 crc kubenswrapper[4869]: I0202 15:12:39.416184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerDied","Data":"0a03f366dd3f2f3e065cb5cc8356200cdb3cd9ea6e0dfdc460968a29d9e33f18"} Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.831068 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.990239 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.990820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.990929 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.991139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.998198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw" (OuterVolumeSpecName: "kube-api-access-plcrw") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "kube-api-access-plcrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.000630 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph" (OuterVolumeSpecName: "ceph") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.024811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory" (OuterVolumeSpecName: "inventory") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.037122 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094637 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094705 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094730 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094750 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.444437 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerDied","Data":"e70061b2b29f5065618bdcae2caaf357d73c1f036f2f96b3530b6e8204f68716"} Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.444773 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e70061b2b29f5065618bdcae2caaf357d73c1f036f2f96b3530b6e8204f68716" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.444613 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.468550 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:41 crc kubenswrapper[4869]: E0202 15:12:41.471731 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.568339 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7"] Feb 02 15:12:41 crc kubenswrapper[4869]: E0202 15:12:41.568899 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.568940 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.569230 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.570123 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.573659 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.573944 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.576068 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.576359 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.576378 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.583266 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7"] Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.706816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.706888 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.706986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.707055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.814584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.815024 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.815319 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.831875 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.890707 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:42 crc kubenswrapper[4869]: I0202 15:12:42.586291 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7"] Feb 02 15:12:43 crc kubenswrapper[4869]: I0202 15:12:43.493794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerStarted","Data":"759c19505a2a8a42dbbdd7a11a5d888506d9194c5d1b15b5a57a7a84f3e26fae"} Feb 02 15:12:43 crc kubenswrapper[4869]: I0202 15:12:43.494306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerStarted","Data":"6ee92e7c158290ca464863c53fe2dee50e2c9d4e8740b867bfefd6e98d2bfc5d"} Feb 02 15:12:43 crc kubenswrapper[4869]: I0202 15:12:43.502789 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" podStartSLOduration=2.067260616 podStartE2EDuration="2.502764836s" podCreationTimestamp="2026-02-02 15:12:41 +0000 UTC" firstStartedPulling="2026-02-02 15:12:42.590189297 +0000 UTC m=+2364.234826067" lastFinishedPulling="2026-02-02 15:12:43.025693517 +0000 UTC m=+2364.670330287" observedRunningTime="2026-02-02 15:12:43.498213825 +0000 UTC m=+2365.142850645" watchObservedRunningTime="2026-02-02 15:12:43.502764836 +0000 UTC m=+2365.147401606" Feb 02 15:12:53 crc kubenswrapper[4869]: I0202 15:12:53.464138 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:53 crc kubenswrapper[4869]: E0202 15:12:53.465100 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:04 crc kubenswrapper[4869]: I0202 15:13:04.463079 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:04 crc kubenswrapper[4869]: E0202 15:13:04.464479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:19 crc kubenswrapper[4869]: I0202 15:13:19.470038 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:19 crc kubenswrapper[4869]: E0202 15:13:19.471011 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:23 crc kubenswrapper[4869]: I0202 15:13:23.957846 4869 generic.go:334] "Generic (PLEG): container finished" podID="c94bd387-2568-4bea-a5be-0ff99e224681" containerID="759c19505a2a8a42dbbdd7a11a5d888506d9194c5d1b15b5a57a7a84f3e26fae" exitCode=0 Feb 02 15:13:23 crc kubenswrapper[4869]: I0202 15:13:23.957898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerDied","Data":"759c19505a2a8a42dbbdd7a11a5d888506d9194c5d1b15b5a57a7a84f3e26fae"} Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.451809 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.527747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.528073 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.528196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.528237 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.534704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph" (OuterVolumeSpecName: "ceph") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.535576 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg" (OuterVolumeSpecName: "kube-api-access-7h8pg") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "kube-api-access-7h8pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.555851 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.568729 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory" (OuterVolumeSpecName: "inventory") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.630923 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.631437 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.631548 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.631651 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.980703 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerDied","Data":"6ee92e7c158290ca464863c53fe2dee50e2c9d4e8740b867bfefd6e98d2bfc5d"} Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.980757 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee92e7c158290ca464863c53fe2dee50e2c9d4e8740b867bfefd6e98d2bfc5d" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.980815 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.097518 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-v2kr2"] Feb 02 15:13:26 crc kubenswrapper[4869]: E0202 15:13:26.097945 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94bd387-2568-4bea-a5be-0ff99e224681" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.097965 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94bd387-2568-4bea-a5be-0ff99e224681" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.098174 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94bd387-2568-4bea-a5be-0ff99e224681" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.098831 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105433 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105639 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.107236 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.133374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-v2kr2"] Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243685 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: 
\"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.350566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.350986 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.353627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.372874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.422688 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:27 crc kubenswrapper[4869]: I0202 15:13:27.000350 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-v2kr2"] Feb 02 15:13:28 crc kubenswrapper[4869]: I0202 15:13:27.999951 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerStarted","Data":"ed3824d3864ea5a68a0a844944e9bafe167d7822db38d412f8ef322577714f18"} Feb 02 15:13:30 crc kubenswrapper[4869]: I0202 15:13:30.029176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerStarted","Data":"08e1c40b2e7846b53264c3b23a65d32033fecad9b3eae45135d7df8ce84b7913"} Feb 02 15:13:30 crc kubenswrapper[4869]: I0202 15:13:30.066397 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" podStartSLOduration=2.336521741 podStartE2EDuration="4.06636739s" podCreationTimestamp="2026-02-02 15:13:26 +0000 UTC" firstStartedPulling="2026-02-02 15:13:27.007785667 +0000 UTC m=+2408.652422437" lastFinishedPulling="2026-02-02 15:13:28.737631316 +0000 UTC m=+2410.382268086" observedRunningTime="2026-02-02 15:13:30.058286012 +0000 UTC m=+2411.702922832" watchObservedRunningTime="2026-02-02 15:13:30.06636739 +0000 UTC m=+2411.711004200" Feb 02 15:13:33 crc kubenswrapper[4869]: I0202 15:13:33.481729 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:33 crc kubenswrapper[4869]: E0202 15:13:33.483233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:38 crc kubenswrapper[4869]: I0202 15:13:38.110646 4869 generic.go:334] "Generic (PLEG): container finished" podID="3d624d16-2868-4154-a700-18e0cebe9357" containerID="08e1c40b2e7846b53264c3b23a65d32033fecad9b3eae45135d7df8ce84b7913" exitCode=0 Feb 02 15:13:38 crc kubenswrapper[4869]: I0202 15:13:38.110713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerDied","Data":"08e1c40b2e7846b53264c3b23a65d32033fecad9b3eae45135d7df8ce84b7913"} Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.542332 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654426 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654588 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654751 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.663146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph" (OuterVolumeSpecName: "ceph") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.670530 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m" (OuterVolumeSpecName: "kube-api-access-ptm7m") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "kube-api-access-ptm7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.693981 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.701485 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758219 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758540 4869 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758664 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758775 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.157945 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerDied","Data":"ed3824d3864ea5a68a0a844944e9bafe167d7822db38d412f8ef322577714f18"} Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.157997 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3824d3864ea5a68a0a844944e9bafe167d7822db38d412f8ef322577714f18" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.158052 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.237755 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll"] Feb 02 15:13:40 crc kubenswrapper[4869]: E0202 15:13:40.238970 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d624d16-2868-4154-a700-18e0cebe9357" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.238999 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d624d16-2868-4154-a700-18e0cebe9357" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.239193 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d624d16-2868-4154-a700-18e0cebe9357" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.240098 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.243002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.243219 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.243886 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.244022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.244091 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.245810 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll"] Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393368 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393529 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496489 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496700 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.501184 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.502625 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.503620 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.519404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.610269 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:41 crc kubenswrapper[4869]: I0202 15:13:41.215679 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll"] Feb 02 15:13:42 crc kubenswrapper[4869]: I0202 15:13:42.177467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerStarted","Data":"1a95623b36d22362083338868e5acb8f7d45c23a0142c51aa658536f6263aa2b"} Feb 02 15:13:42 crc kubenswrapper[4869]: I0202 15:13:42.177894 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerStarted","Data":"c3ac866d9a007493767fa28660cab10ef1d367d0e5d2eaa4ec0b49c766bef778"} Feb 02 15:13:42 crc kubenswrapper[4869]: I0202 15:13:42.204036 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" podStartSLOduration=1.732455223 podStartE2EDuration="2.204017726s" podCreationTimestamp="2026-02-02 15:13:40 +0000 UTC" firstStartedPulling="2026-02-02 15:13:41.217672399 +0000 UTC m=+2422.862309169" lastFinishedPulling="2026-02-02 15:13:41.689234902 +0000 UTC m=+2423.333871672" observedRunningTime="2026-02-02 15:13:42.198537982 +0000 UTC m=+2423.843174752" watchObservedRunningTime="2026-02-02 15:13:42.204017726 +0000 UTC m=+2423.848654496" Feb 02 15:13:47 crc kubenswrapper[4869]: I0202 15:13:47.463067 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:47 crc kubenswrapper[4869]: E0202 15:13:47.463804 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:51 crc kubenswrapper[4869]: I0202 15:13:51.268532 4869 generic.go:334] "Generic (PLEG): container finished" podID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerID="1a95623b36d22362083338868e5acb8f7d45c23a0142c51aa658536f6263aa2b" exitCode=0 Feb 02 15:13:51 crc kubenswrapper[4869]: I0202 15:13:51.269190 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerDied","Data":"1a95623b36d22362083338868e5acb8f7d45c23a0142c51aa658536f6263aa2b"} Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.686487 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808608 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808904 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808995 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.818824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph" (OuterVolumeSpecName: "ceph") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.821467 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm" (OuterVolumeSpecName: "kube-api-access-vw4hm") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "kube-api-access-vw4hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.856149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory" (OuterVolumeSpecName: "inventory") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.866081 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.910785 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.910974 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.911060 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.911160 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.285201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerDied","Data":"c3ac866d9a007493767fa28660cab10ef1d367d0e5d2eaa4ec0b49c766bef778"} Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.285240 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ac866d9a007493767fa28660cab10ef1d367d0e5d2eaa4ec0b49c766bef778" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.285263 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.521545 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97"] Feb 02 15:13:53 crc kubenswrapper[4869]: E0202 15:13:53.522336 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.522384 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.522788 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.524232 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.528642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.529097 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.529532 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.529797 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.530258 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.555509 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97"] Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.627524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.627876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.627986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.628339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.731510 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.731698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.731894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.732004 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.740831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.741405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.743309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.773988 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.877169 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:54 crc kubenswrapper[4869]: I0202 15:13:54.269859 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97"] Feb 02 15:13:54 crc kubenswrapper[4869]: I0202 15:13:54.294596 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerStarted","Data":"342ed92adcce5b144f2ee266e86695c4606bd9853d88bede3eb67bb1e01d4da3"} Feb 02 15:13:55 crc kubenswrapper[4869]: I0202 15:13:55.311061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerStarted","Data":"dd4eb40a25a63694253db80af6c7246ae78d3e8e3f770e2c96c6a5985aa11028"} Feb 02 15:13:55 crc kubenswrapper[4869]: I0202 15:13:55.341714 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" podStartSLOduration=1.906779507 podStartE2EDuration="2.341685702s" podCreationTimestamp="2026-02-02 15:13:53 +0000 UTC" firstStartedPulling="2026-02-02 15:13:54.274200715 +0000 UTC m=+2435.918837485" lastFinishedPulling="2026-02-02 15:13:54.70910687 +0000 UTC m=+2436.353743680" observedRunningTime="2026-02-02 15:13:55.337761036 +0000 UTC m=+2436.982397856" watchObservedRunningTime="2026-02-02 15:13:55.341685702 +0000 UTC m=+2436.986322472" Feb 02 15:14:02 crc kubenswrapper[4869]: I0202 15:14:02.462530 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:02 crc kubenswrapper[4869]: E0202 15:14:02.463379 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:04 crc kubenswrapper[4869]: I0202 15:14:04.405254 4869 generic.go:334] "Generic (PLEG): container finished" podID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerID="dd4eb40a25a63694253db80af6c7246ae78d3e8e3f770e2c96c6a5985aa11028" exitCode=0 Feb 02 15:14:04 crc kubenswrapper[4869]: I0202 15:14:04.405343 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerDied","Data":"dd4eb40a25a63694253db80af6c7246ae78d3e8e3f770e2c96c6a5985aa11028"} Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.858601 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.911932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.912002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.912201 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.912259 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.919477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph" (OuterVolumeSpecName: "ceph") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.922343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d" (OuterVolumeSpecName: "kube-api-access-rvk2d") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "kube-api-access-rvk2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.942100 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory" (OuterVolumeSpecName: "inventory") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.954598 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014163 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014407 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014471 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014533 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.433324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerDied","Data":"342ed92adcce5b144f2ee266e86695c4606bd9853d88bede3eb67bb1e01d4da3"} Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.433406 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342ed92adcce5b144f2ee266e86695c4606bd9853d88bede3eb67bb1e01d4da3" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.433367 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.643027 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g"] Feb 02 15:14:06 crc kubenswrapper[4869]: E0202 15:14:06.643527 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.643556 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.643824 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.644712 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.648064 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.649137 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.649794 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.650142 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.650407 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.650644 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.651067 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.651333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.672602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g"] Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727863 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728022 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728170 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728225 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.829853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830347 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830489 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" 
Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.838164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.839670 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.839990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.841146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.841308 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.841725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.842000 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.846035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: 
\"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.847065 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.862133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.866789 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.866940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.868768 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.977712 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:07 crc kubenswrapper[4869]: I0202 15:14:07.357905 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g"] Feb 02 15:14:07 crc kubenswrapper[4869]: I0202 15:14:07.441451 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerStarted","Data":"6cae2a815e4d258fb152f3c130db09c9f71494f2b17ad3fd0ad5350edc8cab28"} Feb 02 15:14:08 crc kubenswrapper[4869]: I0202 15:14:08.460259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerStarted","Data":"8a0a8792cdc68cd74560abc9c9fdc7ede2e8dec06d4c5bfe6331c1a371e82428"} Feb 02 15:14:08 crc kubenswrapper[4869]: I0202 15:14:08.498619 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" podStartSLOduration=2.088591687 podStartE2EDuration="2.49859959s" podCreationTimestamp="2026-02-02 15:14:06 +0000 UTC" firstStartedPulling="2026-02-02 15:14:07.379258162 +0000 UTC m=+2449.023894932" lastFinishedPulling="2026-02-02 15:14:07.789266065 +0000 UTC m=+2449.433902835" observedRunningTime="2026-02-02 15:14:08.495103615 +0000 UTC m=+2450.139740455" watchObservedRunningTime="2026-02-02 15:14:08.49859959 +0000 UTC m=+2450.143236360" Feb 02 15:14:14 crc kubenswrapper[4869]: I0202 15:14:14.463786 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:14 crc kubenswrapper[4869]: E0202 15:14:14.464652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:25 crc kubenswrapper[4869]: I0202 15:14:25.463088 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:25 crc kubenswrapper[4869]: E0202 15:14:25.464475 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:38 crc kubenswrapper[4869]: I0202 15:14:38.750327 4869 generic.go:334] "Generic (PLEG): container finished" podID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerID="8a0a8792cdc68cd74560abc9c9fdc7ede2e8dec06d4c5bfe6331c1a371e82428" exitCode=0 Feb 02 15:14:38 crc kubenswrapper[4869]: I0202 15:14:38.750409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" 
event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerDied","Data":"8a0a8792cdc68cd74560abc9c9fdc7ede2e8dec06d4c5bfe6331c1a371e82428"} Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.226474 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372299 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373085 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373147 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.379811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.379934 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.381565 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.381562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382426 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.383889 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2" (OuterVolumeSpecName: "kube-api-access-9w4h2") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "kube-api-access-9w4h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.385214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.386835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph" (OuterVolumeSpecName: "ceph") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.416448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.418490 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory" (OuterVolumeSpecName: "inventory") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.462725 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:40 crc kubenswrapper[4869]: E0202 15:14:40.462997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475894 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475968 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475984 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475999 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476013 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476027 
4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476043 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476056 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476067 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476079 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476090 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476101 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476111 4869 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.771027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerDied","Data":"6cae2a815e4d258fb152f3c130db09c9f71494f2b17ad3fd0ad5350edc8cab28"} Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.771074 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cae2a815e4d258fb152f3c130db09c9f71494f2b17ad3fd0ad5350edc8cab28" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.771122 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.940826 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r"] Feb 02 15:14:40 crc kubenswrapper[4869]: E0202 15:14:40.941881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.942026 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.942308 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.943198 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.947278 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.947643 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.947744 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.948287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.948546 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.965372 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r"] Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.090872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.091269 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.091546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.091631 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193626 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.203266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.204567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.207978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.224972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.271214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.786362 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r"] Feb 02 15:14:42 crc kubenswrapper[4869]: I0202 15:14:42.789381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerStarted","Data":"67ab939e61080d26360214528db25bd4d74ad68a7acfb34933b81476a785f9c5"} Feb 02 15:14:42 crc kubenswrapper[4869]: I0202 15:14:42.789436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerStarted","Data":"13253c94f81bfeddbdc2d05dd9ed224b396ab5bf978bd268c048992fa8ab6e1d"} Feb 02 15:14:42 crc kubenswrapper[4869]: I0202 15:14:42.812265 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" podStartSLOduration=2.400760352 podStartE2EDuration="2.812218003s" podCreationTimestamp="2026-02-02 15:14:40 +0000 UTC" firstStartedPulling="2026-02-02 15:14:41.796411543 +0000 UTC m=+2483.441048323" lastFinishedPulling="2026-02-02 15:14:42.207869204 +0000 UTC m=+2483.852505974" observedRunningTime="2026-02-02 15:14:42.803734945 +0000 UTC m=+2484.448371725" watchObservedRunningTime="2026-02-02 15:14:42.812218003 +0000 UTC m=+2484.456854783" Feb 02 15:14:47 crc kubenswrapper[4869]: I0202 15:14:47.839076 4869 generic.go:334] "Generic (PLEG): container finished" podID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerID="67ab939e61080d26360214528db25bd4d74ad68a7acfb34933b81476a785f9c5" exitCode=0 Feb 02 15:14:47 crc kubenswrapper[4869]: I0202 15:14:47.839157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerDied","Data":"67ab939e61080d26360214528db25bd4d74ad68a7acfb34933b81476a785f9c5"} Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.294555 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.390511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.390783 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.391572 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.391672 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.396548 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf" (OuterVolumeSpecName: "kube-api-access-qfxsf") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "kube-api-access-qfxsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.396862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph" (OuterVolumeSpecName: "ceph") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.423020 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.429363 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory" (OuterVolumeSpecName: "inventory") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494284 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494321 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494333 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494343 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.861709 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerDied","Data":"13253c94f81bfeddbdc2d05dd9ed224b396ab5bf978bd268c048992fa8ab6e1d"} Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.861751 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13253c94f81bfeddbdc2d05dd9ed224b396ab5bf978bd268c048992fa8ab6e1d" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.861807 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.952783 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r"] Feb 02 15:14:49 crc kubenswrapper[4869]: E0202 15:14:49.954705 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.954736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.955011 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.956253 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.959175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.960284 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.961332 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.961712 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.962461 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.963050 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.968454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r"] Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105951 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.106041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 
15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.106083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208373 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208475 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208525 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.209960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.212221 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.212508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.213745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.219336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.226572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.294821 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.878804 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r"] Feb 02 15:14:51 crc kubenswrapper[4869]: I0202 15:14:51.884972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerStarted","Data":"cb482c559ab444f53af2ecfd711fbbc076264bbf3a03007a004bb5a9a70007ec"} Feb 02 15:14:51 crc kubenswrapper[4869]: I0202 15:14:51.885434 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerStarted","Data":"7abc890e08cd800cf1fb6fe7ea6576ca4b4aef2758ae10e37bf78f1a50af7996"} Feb 02 15:14:51 crc kubenswrapper[4869]: I0202 15:14:51.918133 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" podStartSLOduration=2.445159325 podStartE2EDuration="2.918112273s" podCreationTimestamp="2026-02-02 15:14:49 +0000 UTC" firstStartedPulling="2026-02-02 15:14:50.88672028 +0000 UTC m=+2492.531357050" lastFinishedPulling="2026-02-02 15:14:51.359673228 +0000 UTC m=+2493.004309998" observedRunningTime="2026-02-02 15:14:51.909966513 +0000 UTC m=+2493.554603283" watchObservedRunningTime="2026-02-02 15:14:51.918112273 +0000 UTC m=+2493.562749043" Feb 02 15:14:54 crc kubenswrapper[4869]: I0202 15:14:54.463308 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:54 crc kubenswrapper[4869]: E0202 15:14:54.464332 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.150768 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.152756 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.155173 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.155335 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.172848 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.216453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.216521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.216563 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.318807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.318861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.318890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.320181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod 
\"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.332854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.340522 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.478335 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.993047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 15:15:01 crc kubenswrapper[4869]: I0202 15:15:01.997790 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerID="1ee657e7e391fb0be0a60133a3c2bc04a0767f387cf6cc279ee259f05131226f" exitCode=0 Feb 02 15:15:01 crc kubenswrapper[4869]: I0202 15:15:01.998248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" event={"ID":"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c","Type":"ContainerDied","Data":"1ee657e7e391fb0be0a60133a3c2bc04a0767f387cf6cc279ee259f05131226f"} Feb 02 15:15:01 crc kubenswrapper[4869]: I0202 15:15:01.998282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" event={"ID":"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c","Type":"ContainerStarted","Data":"82117ee2800615f38cf817041582a17d2015e04778d10023edf8baf4eeab0a02"} Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.396394 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.506982 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.507086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.507259 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.508131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" (UID: "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.515060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz" (OuterVolumeSpecName: "kube-api-access-h54pz") pod "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" (UID: "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c"). InnerVolumeSpecName "kube-api-access-h54pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.515798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" (UID: "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.609407 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.609472 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.609485 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") on node \"crc\" DevicePath \"\"" Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.017445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" event={"ID":"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c","Type":"ContainerDied","Data":"82117ee2800615f38cf817041582a17d2015e04778d10023edf8baf4eeab0a02"} Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.017486 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82117ee2800615f38cf817041582a17d2015e04778d10023edf8baf4eeab0a02" Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.017555 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.500262 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.509129 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 15:15:05 crc kubenswrapper[4869]: I0202 15:15:05.472090 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" path="/var/lib/kubelet/pods/ab9815bf-1049-47c8-8eda-cf2602f2eb83/volumes" Feb 02 15:15:07 crc kubenswrapper[4869]: I0202 15:15:07.463188 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:07 crc kubenswrapper[4869]: E0202 15:15:07.463698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:19 crc kubenswrapper[4869]: I0202 15:15:19.475047 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:19 crc kubenswrapper[4869]: E0202 15:15:19.476310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:30 crc kubenswrapper[4869]: I0202 15:15:30.467195 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:30 crc kubenswrapper[4869]: E0202 15:15:30.467950 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:30 crc kubenswrapper[4869]: I0202 15:15:30.530009 4869 scope.go:117] "RemoveContainer" containerID="e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f" Feb 02 15:15:43 crc kubenswrapper[4869]: I0202 15:15:43.463439 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:43 crc kubenswrapper[4869]: E0202 15:15:43.464885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:55 crc kubenswrapper[4869]: I0202 15:15:55.463449 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:56 crc kubenswrapper[4869]: I0202 15:15:56.020375 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff"} Feb 02 15:16:01 crc kubenswrapper[4869]: I0202 15:16:01.069987 4869 generic.go:334] "Generic (PLEG): container finished" podID="72dccf63-f84a-41bb-a601-d67db9557b64" containerID="cb482c559ab444f53af2ecfd711fbbc076264bbf3a03007a004bb5a9a70007ec" exitCode=0 Feb 02 15:16:01 crc kubenswrapper[4869]: I0202 15:16:01.070074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerDied","Data":"cb482c559ab444f53af2ecfd711fbbc076264bbf3a03007a004bb5a9a70007ec"} Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.491189 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.569994 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570582 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570659 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.580847 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.595106 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph" (OuterVolumeSpecName: "ceph") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.595214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2" (OuterVolumeSpecName: "kube-api-access-jhkw2") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "kube-api-access-jhkw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.600308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.601302 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory" (OuterVolumeSpecName: "inventory") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.601622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673548 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673596 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673616 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673635 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673653 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673671 4869 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.092806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerDied","Data":"7abc890e08cd800cf1fb6fe7ea6576ca4b4aef2758ae10e37bf78f1a50af7996"} Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.092852 4869 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7abc890e08cd800cf1fb6fe7ea6576ca4b4aef2758ae10e37bf78f1a50af7996" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.092977 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.189418 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"] Feb 02 15:16:03 crc kubenswrapper[4869]: E0202 15:16:03.190156 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerName="collect-profiles" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190188 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerName="collect-profiles" Feb 02 15:16:03 crc kubenswrapper[4869]: E0202 15:16:03.190234 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dccf63-f84a-41bb-a601-d67db9557b64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190252 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dccf63-f84a-41bb-a601-d67db9557b64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190629 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerName="collect-profiles" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190670 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dccf63-f84a-41bb-a601-d67db9557b64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.191720 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.195363 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.195463 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.195842 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.198624 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.198688 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.201215 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.201260 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.204469 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"] Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287584 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287839 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287900 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389349 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.393890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.394700 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.397611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.397611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.399439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: 
I0202 15:16:03.402295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.407960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.517047 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:04 crc kubenswrapper[4869]: I0202 15:16:04.195491 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"] Feb 02 15:16:04 crc kubenswrapper[4869]: I0202 15:16:04.196864 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:16:05 crc kubenswrapper[4869]: I0202 15:16:05.112002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerStarted","Data":"77227ab15c4e6f6027db0220f21c3ecbc1457b11d5434d1902eaae9f95ef32c9"} Feb 02 15:16:05 crc kubenswrapper[4869]: I0202 15:16:05.112492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerStarted","Data":"8d2984ec464ca86ed83beaded68c0b4de1fd280a2ba9f1825707b547eb063f6f"} Feb 02 15:16:05 crc kubenswrapper[4869]: I0202 15:16:05.146837 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" podStartSLOduration=1.670433555 podStartE2EDuration="2.146776336s" podCreationTimestamp="2026-02-02 15:16:03 +0000 UTC" firstStartedPulling="2026-02-02 15:16:04.196461482 +0000 UTC m=+2565.841098292" lastFinishedPulling="2026-02-02 15:16:04.672804263 +0000 UTC m=+2566.317441073" observedRunningTime="2026-02-02 15:16:05.135246224 +0000 UTC m=+2566.779883024" watchObservedRunningTime="2026-02-02 15:16:05.146776336 +0000 UTC m=+2566.791413146" Feb 02 15:17:00 crc kubenswrapper[4869]: I0202 15:17:00.697899 4869 generic.go:334] "Generic (PLEG): container finished" podID="cece8f41-7b97-43d1-b538-c09300006b15" containerID="77227ab15c4e6f6027db0220f21c3ecbc1457b11d5434d1902eaae9f95ef32c9" exitCode=0 Feb 02 15:17:00 crc kubenswrapper[4869]: I0202 15:17:00.698000 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerDied","Data":"77227ab15c4e6f6027db0220f21c3ecbc1457b11d5434d1902eaae9f95ef32c9"} Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.187462 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224671 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224698 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.233087 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph" (OuterVolumeSpecName: "ceph") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.233114 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf" (OuterVolumeSpecName: "kube-api-access-srbhf") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "kube-api-access-srbhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.236420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.257578 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory" (OuterVolumeSpecName: "inventory") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.258482 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.259341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.280787 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326748 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326781 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326792 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326801 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326810 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326819 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326828 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.719541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerDied","Data":"8d2984ec464ca86ed83beaded68c0b4de1fd280a2ba9f1825707b547eb063f6f"} Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.719598 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2984ec464ca86ed83beaded68c0b4de1fd280a2ba9f1825707b547eb063f6f" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.719655 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.812325 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"] Feb 02 15:17:02 crc kubenswrapper[4869]: E0202 15:17:02.813074 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cece8f41-7b97-43d1-b538-c09300006b15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.813101 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cece8f41-7b97-43d1-b538-c09300006b15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.813350 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cece8f41-7b97-43d1-b538-c09300006b15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.814059 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818753 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818821 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818926 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.819418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.819557 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.829763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"] Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837274 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9px9f\" 
(UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939869 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939928 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.944377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.944378 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.951503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.952024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.952607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.958250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:03 crc kubenswrapper[4869]: I0202 15:17:03.140618 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:17:03 crc kubenswrapper[4869]: I0202 15:17:03.671439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"] Feb 02 15:17:03 crc kubenswrapper[4869]: W0202 15:17:03.677324 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c45a4e_9fe0_4d8d_a74d_162a45a36d5e.slice/crio-a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f WatchSource:0}: Error finding container a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f: Status 404 returned error can't find the container with id a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f Feb 02 15:17:03 crc kubenswrapper[4869]: I0202 15:17:03.728536 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerStarted","Data":"a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f"} Feb 02 15:17:04 crc kubenswrapper[4869]: I0202 15:17:04.737715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerStarted","Data":"a1bcc83de6c8c3d6d8f0d46b65b7aea3a466ecc90ab2e07ea6784ad03b72f134"} Feb 02 15:17:04 crc kubenswrapper[4869]: I0202 15:17:04.755455 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" podStartSLOduration=2.330080433 podStartE2EDuration="2.755437914s" podCreationTimestamp="2026-02-02 15:17:02 +0000 UTC" firstStartedPulling="2026-02-02 15:17:03.680300239 +0000 UTC m=+2625.324937009" lastFinishedPulling="2026-02-02 15:17:04.1056577 +0000 UTC m=+2625.750294490" observedRunningTime="2026-02-02 15:17:04.753819725 +0000 UTC m=+2626.398456545" watchObservedRunningTime="2026-02-02 15:17:04.755437914 +0000 UTC m=+2626.400074684" Feb 02 15:17:11 crc kubenswrapper[4869]: I0202 15:17:11.948893 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"] Feb 02 15:17:11 crc kubenswrapper[4869]: I0202 15:17:11.952028 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:11 crc kubenswrapper[4869]: I0202 15:17:11.993438 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"] Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.023577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.023738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.023815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.125562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.125639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.125661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.126167 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.126311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.143859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.279586 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.802019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"] Feb 02 15:17:13 crc kubenswrapper[4869]: I0202 15:17:13.830708 4869 generic.go:334] "Generic (PLEG): container finished" podID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea" exitCode=0 Feb 02 15:17:13 crc kubenswrapper[4869]: I0202 15:17:13.830841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"} Feb 02 15:17:13 crc kubenswrapper[4869]: I0202 15:17:13.831816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerStarted","Data":"935e53aa74a8a25a69dce794297ca87892c29b09030ee86052fff3f55b981f1f"} Feb 02 15:17:15 crc kubenswrapper[4869]: I0202 15:17:15.853269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerStarted","Data":"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"} Feb 02 15:17:16 crc kubenswrapper[4869]: I0202 15:17:16.865599 4869 generic.go:334] "Generic (PLEG): container finished" podID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945" exitCode=0 Feb 02 15:17:16 crc kubenswrapper[4869]: I0202 15:17:16.865649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"} Feb 02 15:17:17 crc kubenswrapper[4869]: I0202 15:17:17.886731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerStarted","Data":"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"} Feb 02 15:17:17 crc kubenswrapper[4869]: I0202 15:17:17.911857 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-88wj9" podStartSLOduration=3.457839542 podStartE2EDuration="6.911835023s" podCreationTimestamp="2026-02-02 15:17:11 +0000 UTC" firstStartedPulling="2026-02-02 15:17:13.833198884 +0000 UTC m=+2635.477835654" lastFinishedPulling="2026-02-02 15:17:17.287194365 +0000 UTC m=+2638.931831135" observedRunningTime="2026-02-02 15:17:17.909532776 +0000 UTC m=+2639.554169546" watchObservedRunningTime="2026-02-02 15:17:17.911835023 +0000 UTC m=+2639.556471803" Feb 02 15:17:22 crc kubenswrapper[4869]: I0202 15:17:22.280152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 
15:17:22 crc kubenswrapper[4869]: I0202 15:17:22.280825 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:23 crc kubenswrapper[4869]: I0202 15:17:23.341535 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-88wj9" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" probeResult="failure" output=< Feb 02 15:17:23 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 15:17:23 crc kubenswrapper[4869]: > Feb 02 15:17:32 crc kubenswrapper[4869]: I0202 15:17:32.347728 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:32 crc kubenswrapper[4869]: I0202 15:17:32.432554 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:32 crc kubenswrapper[4869]: I0202 15:17:32.595232 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"] Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.054535 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-88wj9" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" containerID="cri-o://07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" gracePeriod=2 Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.541875 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.726077 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.726226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.726322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.727625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities" (OuterVolumeSpecName: "utilities") pod "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" (UID: "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.734075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr" (OuterVolumeSpecName: "kube-api-access-5qnpr") pod "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" (UID: "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2"). InnerVolumeSpecName "kube-api-access-5qnpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.830364 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.830479 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.862291 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" (UID: "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.932814 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068143 4869 generic.go:334] "Generic (PLEG): container finished" podID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" exitCode=0 Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068212 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"} Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068945 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"935e53aa74a8a25a69dce794297ca87892c29b09030ee86052fff3f55b981f1f"} Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.069000 4869 scope.go:117] "RemoveContainer" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.093357 4869 scope.go:117] "RemoveContainer" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.119003 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"] Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.121851 4869 scope.go:117] "RemoveContainer" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.135694 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"] Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.175311 4869 scope.go:117] "RemoveContainer" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" Feb 02 15:17:35 crc kubenswrapper[4869]: E0202 15:17:35.176140 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130\": container with ID starting with 07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130 not found: ID does not exist" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.176190 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"} err="failed to get container status \"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130\": rpc error: code = NotFound desc = could not find container \"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130\": container with ID starting with 07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130 not found: ID does not exist" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.176220 4869 scope.go:117] "RemoveContainer" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945" Feb 02 15:17:35 crc kubenswrapper[4869]: E0202 15:17:35.176855 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945\": container with ID starting with 77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945 not found: ID does not exist" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.176917 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"} err="failed to get container status \"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945\": rpc error: code = NotFound desc = could not find container \"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945\": container with ID starting with 77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945 not found: ID does not exist" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.177072 4869 scope.go:117] "RemoveContainer" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea" Feb 02 15:17:35 crc kubenswrapper[4869]: E0202 15:17:35.177515 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea\": container with ID starting with c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea not found: ID does not exist" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.177552 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"} err="failed to get container status \"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea\": rpc error: code = NotFound desc = could not find container \"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea\": container with ID starting with c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea not found: ID does not exist" Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.482306 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" path="/var/lib/kubelet/pods/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2/volumes" Feb 02 15:18:15 crc kubenswrapper[4869]: I0202 15:18:15.304312 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:18:15 crc kubenswrapper[4869]: I0202 15:18:15.304983 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:18:45 crc kubenswrapper[4869]: I0202 15:18:45.304785 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:18:45 crc kubenswrapper[4869]: I0202 15:18:45.305499 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 
15:19:15.304279 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.304876 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.305003 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.305949 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.306049 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff" gracePeriod=600 Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106000 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff" exitCode=0 Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff"} Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106383 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"} Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106412 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.928307 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:20:49 crc kubenswrapper[4869]: E0202 15:20:49.929206 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-utilities" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929221 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-utilities" Feb 02 15:20:49 crc kubenswrapper[4869]: E0202 15:20:49.929239 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-content" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929247 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-content" Feb 02 15:20:49 crc kubenswrapper[4869]: E0202 15:20:49.929262 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929271 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929511 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.931372 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.953901 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.029663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.029771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.029851 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.131513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.131642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.131786 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod 
\"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.132079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.132401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.156182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.284524 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.782014 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:20:51 crc kubenswrapper[4869]: I0202 15:20:51.083407 4869 generic.go:334] "Generic (PLEG): container finished" podID="5d60644a-3c45-4853-b628-4e9517c65940" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" exitCode=0 Feb 02 15:20:51 crc kubenswrapper[4869]: I0202 15:20:51.083446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72"} Feb 02 15:20:51 crc kubenswrapper[4869]: I0202 15:20:51.083470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerStarted","Data":"5f377289cdedfb216d3a3b90c052283a8e116cbfd4faa6e26b39c99e0747b88e"} Feb 02 15:20:52 crc kubenswrapper[4869]: I0202 15:20:52.100502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerStarted","Data":"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00"} Feb 02 15:20:53 crc kubenswrapper[4869]: I0202 15:20:53.112217 4869 generic.go:334] "Generic (PLEG): container finished" podID="5d60644a-3c45-4853-b628-4e9517c65940" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" exitCode=0 Feb 02 15:20:53 crc kubenswrapper[4869]: I0202 15:20:53.112326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00"} Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.122794 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerStarted","Data":"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b"} Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.147999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2fvl2" podStartSLOduration=2.7270488090000002 podStartE2EDuration="5.147966853s" podCreationTimestamp="2026-02-02 15:20:49 +0000 UTC" firstStartedPulling="2026-02-02 15:20:51.085618837 +0000 UTC m=+2852.730255647" lastFinishedPulling="2026-02-02 15:20:53.506536921 +0000 UTC m=+2855.151173691" observedRunningTime="2026-02-02 15:20:54.145445322 +0000 UTC m=+2855.790082092" watchObservedRunningTime="2026-02-02 15:20:54.147966853 +0000 UTC m=+2855.792603673" Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.899365 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.901823 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.919048 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.021820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.021917 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.021977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.123827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124151 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.146153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.224732 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.755850 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:20:55 crc kubenswrapper[4869]: W0202 15:20:55.766517 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3add0bf_cfd3_4829_bfb6_e72ca53eab05.slice/crio-422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d WatchSource:0}: Error finding container 422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d: Status 404 returned error can't find the container with id 422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d Feb 02 15:20:56 crc kubenswrapper[4869]: I0202 15:20:56.141315 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" exitCode=0 Feb 02 15:20:56 crc kubenswrapper[4869]: I0202 15:20:56.141690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8"} Feb 02 15:20:56 crc kubenswrapper[4869]: I0202 15:20:56.141726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerStarted","Data":"422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d"} Feb 02 15:20:57 crc kubenswrapper[4869]: I0202 15:20:57.157215 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" 
event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerStarted","Data":"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04"} Feb 02 15:20:58 crc kubenswrapper[4869]: I0202 15:20:58.178802 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" exitCode=0 Feb 02 15:20:58 crc kubenswrapper[4869]: I0202 15:20:58.178848 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04"} Feb 02 15:20:59 crc kubenswrapper[4869]: I0202 15:20:59.191411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerStarted","Data":"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c"} Feb 02 15:20:59 crc kubenswrapper[4869]: I0202 15:20:59.217216 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4g924" podStartSLOduration=2.747626486 podStartE2EDuration="5.217191282s" podCreationTimestamp="2026-02-02 15:20:54 +0000 UTC" firstStartedPulling="2026-02-02 15:20:56.143488336 +0000 UTC m=+2857.788125096" lastFinishedPulling="2026-02-02 15:20:58.613053082 +0000 UTC m=+2860.257689892" observedRunningTime="2026-02-02 15:20:59.211079863 +0000 UTC m=+2860.855716633" watchObservedRunningTime="2026-02-02 15:20:59.217191282 +0000 UTC m=+2860.861828072" Feb 02 15:21:00 crc kubenswrapper[4869]: I0202 15:21:00.285133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:00 crc kubenswrapper[4869]: I0202 15:21:00.285208 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:00 crc kubenswrapper[4869]: I0202 15:21:00.339695 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:01 crc kubenswrapper[4869]: I0202 15:21:01.261213 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:01 crc kubenswrapper[4869]: I0202 15:21:01.895000 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.226296 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2fvl2" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" containerID="cri-o://eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" gracePeriod=2 Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.742018 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.792432 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"5d60644a-3c45-4853-b628-4e9517c65940\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.792618 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"5d60644a-3c45-4853-b628-4e9517c65940\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.792666 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"5d60644a-3c45-4853-b628-4e9517c65940\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.793562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities" (OuterVolumeSpecName: "utilities") pod "5d60644a-3c45-4853-b628-4e9517c65940" (UID: "5d60644a-3c45-4853-b628-4e9517c65940"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.801903 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c" (OuterVolumeSpecName: "kube-api-access-x7c7c") pod "5d60644a-3c45-4853-b628-4e9517c65940" (UID: "5d60644a-3c45-4853-b628-4e9517c65940"). InnerVolumeSpecName "kube-api-access-x7c7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.858690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d60644a-3c45-4853-b628-4e9517c65940" (UID: "5d60644a-3c45-4853-b628-4e9517c65940"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.894491 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.894546 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.894566 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240233 4869 generic.go:334] "Generic (PLEG): container finished" podID="5d60644a-3c45-4853-b628-4e9517c65940" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" exitCode=0 Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b"} Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240383 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"5f377289cdedfb216d3a3b90c052283a8e116cbfd4faa6e26b39c99e0747b88e"} Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240425 4869 scope.go:117] "RemoveContainer" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240427 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.267424 4869 scope.go:117] "RemoveContainer" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.301209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.310948 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.321665 4869 scope.go:117] "RemoveContainer" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.353971 4869 scope.go:117] "RemoveContainer" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" Feb 02 15:21:04 crc kubenswrapper[4869]: E0202 15:21:04.354797 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b\": container with ID starting with eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b not found: ID does not exist" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.354832 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b"} err="failed to get container status \"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b\": rpc error: code = NotFound desc = could not find container \"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b\": container with ID starting with eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b not found: ID does not exist" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.354851 4869 scope.go:117] "RemoveContainer" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" Feb 02 15:21:04 crc kubenswrapper[4869]: E0202 15:21:04.355788 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00\": container with ID starting with f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00 not found: ID does not exist" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.355815 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00"} err="failed to get container status \"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00\": rpc error: code = NotFound desc = could not find container \"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00\": container with ID starting with f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00 not found: ID does not exist" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.355828 4869 scope.go:117] "RemoveContainer" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" Feb 02 15:21:04 crc kubenswrapper[4869]: E0202 15:21:04.356320 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72\": container with ID starting with f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72 not found: ID does not exist" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.356345 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72"} err="failed to get container status \"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72\": rpc error: code = NotFound desc = could not find container \"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72\": container with ID starting with f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72 not found: ID does not exist" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.226795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.227436 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.317768 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.405717 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.480862 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d60644a-3c45-4853-b628-4e9517c65940" path="/var/lib/kubelet/pods/5d60644a-3c45-4853-b628-4e9517c65940/volumes" Feb 02 15:21:07 crc kubenswrapper[4869]: I0202 15:21:07.699396 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:21:07 crc kubenswrapper[4869]: I0202 15:21:07.699971 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4g924" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" containerID="cri-o://a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" gracePeriod=2 Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.167375 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282130 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" exitCode=0 Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282170 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c"} Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d"} Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282227 4869 scope.go:117] "RemoveContainer" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.283440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.283585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.283659 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.284672 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities" (OuterVolumeSpecName: "utilities") pod "b3add0bf-cfd3-4829-bfb6-e72ca53eab05" (UID: "b3add0bf-cfd3-4829-bfb6-e72ca53eab05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.288738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9" (OuterVolumeSpecName: "kube-api-access-6g7p9") pod "b3add0bf-cfd3-4829-bfb6-e72ca53eab05" (UID: "b3add0bf-cfd3-4829-bfb6-e72ca53eab05"). InnerVolumeSpecName "kube-api-access-6g7p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.337448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3add0bf-cfd3-4829-bfb6-e72ca53eab05" (UID: "b3add0bf-cfd3-4829-bfb6-e72ca53eab05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.354780 4869 scope.go:117] "RemoveContainer" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.372638 4869 scope.go:117] "RemoveContainer" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.386407 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.386442 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.386453 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.407703 4869 scope.go:117] "RemoveContainer" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" Feb 02 15:21:08 crc kubenswrapper[4869]: E0202 15:21:08.408222 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c\": container with ID starting with a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c not found: ID does not exist" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408274 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c"} err="failed to get container status \"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c\": rpc error: code = NotFound desc = could not find container \"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c\": container with ID starting with a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c not found: ID does not exist" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408300 4869 scope.go:117] "RemoveContainer" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" Feb 02 15:21:08 crc kubenswrapper[4869]: E0202 15:21:08.408715 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04\": container with ID starting with 55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04 not found: ID does not exist" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408743 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04"} err="failed to get container status \"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04\": rpc error: code = NotFound desc = could not find container 
\"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04\": container with ID starting with 55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04 not found: ID does not exist" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408762 4869 scope.go:117] "RemoveContainer" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" Feb 02 15:21:08 crc kubenswrapper[4869]: E0202 15:21:08.409083 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8\": container with ID starting with af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8 not found: ID does not exist" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.409102 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8"} err="failed to get container status \"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8\": rpc error: code = NotFound desc = could not find container \"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8\": container with ID starting with af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8 not found: ID does not exist" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.620483 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.667647 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:21:09 crc kubenswrapper[4869]: I0202 15:21:09.478106 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" path="/var/lib/kubelet/pods/b3add0bf-cfd3-4829-bfb6-e72ca53eab05/volumes" Feb 02 15:21:15 crc kubenswrapper[4869]: I0202 15:21:15.304614 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:21:15 crc kubenswrapper[4869]: I0202 15:21:15.305412 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:21:26 crc kubenswrapper[4869]: I0202 15:21:26.494563 4869 generic.go:334] "Generic (PLEG): container finished" podID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerID="a1bcc83de6c8c3d6d8f0d46b65b7aea3a466ecc90ab2e07ea6784ad03b72f134" exitCode=0 Feb 02 15:21:26 crc kubenswrapper[4869]: I0202 15:21:26.495230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerDied","Data":"a1bcc83de6c8c3d6d8f0d46b65b7aea3a466ecc90ab2e07ea6784ad03b72f134"} Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.012620 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106736 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106803 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.113165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph" (OuterVolumeSpecName: "ceph") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.115156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.115164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f" (OuterVolumeSpecName: "kube-api-access-9px9f") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "kube-api-access-9px9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.132843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.139239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.145483 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory" (OuterVolumeSpecName: "inventory") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209391 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209444 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209467 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209487 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209508 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209528 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.515121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerDied","Data":"a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f"} Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.515173 4869 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.515207 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656436 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"] Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656845 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656869 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656886 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656895 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656928 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656938 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656993 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657007 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.657034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657043 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.657096 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657106 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.657127 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657135 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657511 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" 
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657548 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657568 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.658434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.660721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.661021 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662012 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662170 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662324 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662355 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662453 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662754 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.677105 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"] Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829985 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830190 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: 
\"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.931969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932066 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932167 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932230 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932582 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.933309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.934066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.937348 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 
15:21:28.939017 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.940868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.941826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.946258 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.946314 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.947174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.958870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.969295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.977431 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:21:29 crc kubenswrapper[4869]: I0202 15:21:29.559187 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"] Feb 02 15:21:29 crc kubenswrapper[4869]: I0202 15:21:29.567408 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:21:30 crc kubenswrapper[4869]: I0202 15:21:30.535771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerStarted","Data":"5f5c174b338b5c46b501b4e35b795946f7906c1879c7b0cdc3ebf6b01cbaf2ff"} Feb 02 15:21:30 crc kubenswrapper[4869]: I0202 15:21:30.536145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerStarted","Data":"c9c943b42281a5f7a9cffcb44ae79a79b00120da70449b3e3ad985f6375d8b56"} Feb 02 15:21:30 crc kubenswrapper[4869]: I0202 15:21:30.559918 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" podStartSLOduration=2.111595725 podStartE2EDuration="2.55988109s" podCreationTimestamp="2026-02-02 15:21:28 +0000 UTC" firstStartedPulling="2026-02-02 15:21:29.567172598 +0000 UTC m=+2891.211809368" lastFinishedPulling="2026-02-02 15:21:30.015457953 +0000 UTC m=+2891.660094733" observedRunningTime="2026-02-02 15:21:30.553873993 +0000 UTC m=+2892.198510763" watchObservedRunningTime="2026-02-02 15:21:30.55988109 +0000 UTC m=+2892.204517860" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.248703 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.253274 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.272811 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.430861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.431066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.431364 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.533603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.533681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.533780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.534167 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.534218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.576252 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.581955 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.097204 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:43 crc kubenswrapper[4869]: W0202 15:21:43.114788 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6464971e_d1e4_4e00_b758_17fb7448a055.slice/crio-2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e WatchSource:0}: Error finding container 2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e: Status 404 returned error can't find the container with id 2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.657841 4869 generic.go:334] "Generic (PLEG): container finished" podID="6464971e-d1e4-4e00-b758-17fb7448a055" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" exitCode=0 Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.657956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f"} Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.658292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerStarted","Data":"2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e"} Feb 02 15:21:44 crc kubenswrapper[4869]: I0202 15:21:44.668271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerStarted","Data":"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733"} Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.305103 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.305470 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.685588 4869 generic.go:334] "Generic (PLEG): container finished" podID="6464971e-d1e4-4e00-b758-17fb7448a055" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" exitCode=0 Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.685669 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733"} Feb 02 15:21:46 crc kubenswrapper[4869]: I0202 15:21:46.696830 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerStarted","Data":"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5"} Feb 02 15:21:46 crc kubenswrapper[4869]: I0202 15:21:46.723800 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ttqqd" podStartSLOduration=2.279986923 podStartE2EDuration="4.723773098s" podCreationTimestamp="2026-02-02 15:21:42 +0000 UTC" firstStartedPulling="2026-02-02 15:21:43.659713138 +0000 UTC m=+2905.304349938" lastFinishedPulling="2026-02-02 15:21:46.103499313 +0000 UTC m=+2907.748136113" observedRunningTime="2026-02-02 15:21:46.721285917 +0000 UTC m=+2908.365922727" watchObservedRunningTime="2026-02-02 15:21:46.723773098 +0000 UTC m=+2908.368409888" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.583043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.583751 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.647670 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.808393 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.891298 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:54 crc kubenswrapper[4869]: I0202 15:21:54.780239 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ttqqd" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server" containerID="cri-o://aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" gracePeriod=2 Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.283606 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.397694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"6464971e-d1e4-4e00-b758-17fb7448a055\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.397889 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"6464971e-d1e4-4e00-b758-17fb7448a055\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.398110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"6464971e-d1e4-4e00-b758-17fb7448a055\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.399117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities" (OuterVolumeSpecName: "utilities") pod "6464971e-d1e4-4e00-b758-17fb7448a055" (UID: "6464971e-d1e4-4e00-b758-17fb7448a055"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.403362 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk" (OuterVolumeSpecName: "kube-api-access-wwnmk") pod "6464971e-d1e4-4e00-b758-17fb7448a055" (UID: "6464971e-d1e4-4e00-b758-17fb7448a055"). InnerVolumeSpecName "kube-api-access-wwnmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.428035 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6464971e-d1e4-4e00-b758-17fb7448a055" (UID: "6464971e-d1e4-4e00-b758-17fb7448a055"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.500379 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.500433 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.500456 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795039 4869 generic.go:334] "Generic (PLEG): container finished" podID="6464971e-d1e4-4e00-b758-17fb7448a055" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" exitCode=0 Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5"} Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795229 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e"} Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795262 4869 scope.go:117] "RemoveContainer" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.797048 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.832102 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.842546 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.842616 4869 scope.go:117] "RemoveContainer" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.877316 4869 scope.go:117] "RemoveContainer" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.941330 4869 scope.go:117] "RemoveContainer" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" Feb 02 15:21:55 crc kubenswrapper[4869]: E0202 15:21:55.941780 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5\": container with ID starting with aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5 not found: ID does not exist" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.941856 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5"} err="failed to get container status \"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5\": rpc error: code = NotFound desc = could not find container \"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5\": container with ID starting with aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5 not found: ID does not exist" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.941899 4869 scope.go:117] "RemoveContainer" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" Feb 02 15:21:55 crc kubenswrapper[4869]: E0202 15:21:55.946880 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733\": container with ID starting with e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733 not found: ID does not exist" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.947080 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733"} err="failed to get container status \"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733\": rpc error: code = NotFound desc = could not find container \"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733\": container with ID starting with e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733 not found: ID does not exist" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.947126 4869 scope.go:117] "RemoveContainer" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" Feb 02 15:21:55 crc kubenswrapper[4869]: E0202 15:21:55.947677 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f\": container with ID starting with 0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f not found: ID does not exist" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.947730 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f"} err="failed to get container status \"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f\": rpc error: code = NotFound desc = could not find container \"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f\": container with ID starting with 0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f not found: ID does not exist" Feb 02 15:21:57 crc kubenswrapper[4869]: I0202 15:21:57.473020 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" path="/var/lib/kubelet/pods/6464971e-d1e4-4e00-b758-17fb7448a055/volumes" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.304006 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.306466 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.306752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.308159 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.308416 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" gracePeriod=600 Feb 02 15:22:15 crc kubenswrapper[4869]: E0202 15:22:15.433471 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.009161 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" exitCode=0 Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.009231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"} Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.009281 4869 scope.go:117] "RemoveContainer" containerID="d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff" Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.012795 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:16 crc kubenswrapper[4869]: E0202 15:22:16.013245 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:28 crc kubenswrapper[4869]: I0202 15:22:28.463507 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:28 crc kubenswrapper[4869]: E0202 15:22:28.464411 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:41 crc kubenswrapper[4869]: I0202 15:22:41.462476 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:41 crc kubenswrapper[4869]: E0202 15:22:41.463436 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:56 crc kubenswrapper[4869]: I0202 15:22:56.464161 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:56 crc kubenswrapper[4869]: E0202 15:22:56.465637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:23:11 crc kubenswrapper[4869]: I0202 15:23:11.462451 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" 
Feb 02 15:23:11 crc kubenswrapper[4869]: E0202 15:23:11.463239 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:23:24 crc kubenswrapper[4869]: I0202 15:23:24.463176 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:23:24 crc kubenswrapper[4869]: E0202 15:23:24.464254 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:23:39 crc kubenswrapper[4869]: I0202 15:23:39.469004 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:23:39 crc kubenswrapper[4869]: E0202 15:23:39.469875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:23:50 crc kubenswrapper[4869]: I0202 15:23:50.467366 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:23:50 crc kubenswrapper[4869]: E0202 15:23:50.469268 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:23:52 crc kubenswrapper[4869]: I0202 15:23:52.959108 4869 generic.go:334] "Generic (PLEG): container finished" podID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerID="5f5c174b338b5c46b501b4e35b795946f7906c1879c7b0cdc3ebf6b01cbaf2ff" exitCode=0 Feb 02 15:23:52 crc kubenswrapper[4869]: I0202 15:23:52.959225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerDied","Data":"5f5c174b338b5c46b501b4e35b795946f7906c1879c7b0cdc3ebf6b01cbaf2ff"} Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.465221 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550936 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551170 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551206 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.564159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds" (OuterVolumeSpecName: "kube-api-access-5g4ds") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "kube-api-access-5g4ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.571368 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph" (OuterVolumeSpecName: "ceph") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.577278 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.585557 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.590039 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.599795 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.601962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory" (OuterVolumeSpecName: "inventory") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.604575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.605045 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.605569 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.615396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653768 4869 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653813 4869 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653829 4869 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653843 4869 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653856 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653868 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653880 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653892 4869 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653903 4869 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653935 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653947 4869 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.981257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerDied","Data":"c9c943b42281a5f7a9cffcb44ae79a79b00120da70449b3e3ad985f6375d8b56"} Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.981314 4869 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c943b42281a5f7a9cffcb44ae79a79b00120da70449b3e3ad985f6375d8b56" Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.981392 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" Feb 02 15:24:01 crc kubenswrapper[4869]: I0202 15:24:01.462789 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:01 crc kubenswrapper[4869]: E0202 15:24:01.463721 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928147 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928803 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928821 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server" Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928838 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-utilities" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928844 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-utilities" Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928864 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928871 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928886 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-content" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928891 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-content" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.929094 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.929117 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.930021 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.932422 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.932990 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953009 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953027 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9gh\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-kube-api-access-6f9gh\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953560 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-run\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953921 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.016987 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.018687 4869 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.025627 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.039479 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-scripts\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055212 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-dev\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055257 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-run\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055297 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-ceph\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055378 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks8h4\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-kube-api-access-ks8h4\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055417 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-sys\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055436 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055481 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055495 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " 
pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9gh\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-kube-api-access-6f9gh\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-run\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055576 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055591 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055730 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055750 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-run\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.056098 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.056888 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc 
kubenswrapper[4869]: I0202 15:24:09.057439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057491 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057507 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.058605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.061717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.062623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.063209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.063713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.076994 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-ceph\") pod \"cinder-volume-volume1-0\" (UID: 
\"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.089278 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9gh\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-kube-api-access-6f9gh\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-run\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157407 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157451 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-scripts\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-dev\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-ceph\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks8h4\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-kube-api-access-ks8h4\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-sys\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-run\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157794 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158513 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-dev\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.159044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.159192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-sys\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.161820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.161829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-scripts\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.162323 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-ceph\") pod \"cinder-backup-0\" (UID: 
\"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.162389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.162859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.178599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks8h4\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-kube-api-access-ks8h4\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.249942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.341721 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.494371 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.496294 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.496405 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.529383 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.530688 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.535843 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.546336 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.548216 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550306 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550461 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550541 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550628 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7fldw" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572951 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573000 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " 
pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.600451 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.615055 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676610 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676771 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.677570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.678141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.678384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.681251 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.681894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.683547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.693206 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.703628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.706816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.723362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.734504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.741257 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.775968 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.777468 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778649 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.783364 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q8bdk" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.783367 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.784080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.785480 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.811407 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.832340 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.870530 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.882078 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.882210 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.882723 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.884029 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.903252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.947469 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.947540 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.948858 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.959782 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.960065 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.964552 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982135 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982212 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982278 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982376 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982412 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.016337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.037543 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085484 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085664 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085804 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.086563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.089730 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.090111 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.091798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.093447 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.101824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.102834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.104136 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.122263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.133530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.160301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37","Type":"ContainerStarted","Data":"413a71d83cad7dbb27c3eedc69feaa178bfef6776e9a6f53bd15629dd0ae3e78"} Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187246 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187306 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187378 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187468 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.190313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.190668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.192765 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.193605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.195674 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") 
pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.200230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.200890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.214552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.242551 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.321238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.399243 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.515811 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.704634 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:24:10 crc kubenswrapper[4869]: W0202 15:24:10.715649 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b666475_dc9a_41e9_b087_b2042c2dd80f.slice/crio-0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a WatchSource:0}: Error finding container 0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a: Status 404 returned error can't find the container with id 0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.717414 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.734391 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.888152 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.968418 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:11 crc kubenswrapper[4869]: W0202 15:24:11.027368 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94981156_d105_463b_90e1_db9b2dbbb853.slice/crio-57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9 WatchSource:0}: Error finding container 57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9: Status 404 returned error can't find the container with id 57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9 Feb 02 15:24:11 crc kubenswrapper[4869]: W0202 15:24:11.028850 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb18e2a_67e6_4932_97fb_dd57b66f6c93.slice/crio-94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d WatchSource:0}: Error finding container 94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d: Status 404 returned error can't find the container with id 94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.174707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerStarted","Data":"1e3835ffee852cf7e2e461dbfd0c1bce873454f7dd01eb6e5bb8f0bd42308327"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.180189 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerStarted","Data":"e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.180244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" 
event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerStarted","Data":"0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.184402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerStarted","Data":"57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.187836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerStarted","Data":"2db55e6d04f2819c1e06bcde8e721cfa825f9601f520cf4e3f6565c2aaa1d4aa"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.189170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ffb18e2a-67e6-4932-97fb-dd57b66f6c93","Type":"ContainerStarted","Data":"94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.195300 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerID="f6a65d674c18b4d91e1a4a5378741c663bb46842c68ee5b840ab49a144aef022" exitCode=0 Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.195361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d921-account-create-update-shfv2" event={"ID":"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607","Type":"ContainerDied","Data":"f6a65d674c18b4d91e1a4a5378741c663bb46842c68ee5b840ab49a144aef022"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.195395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d921-account-create-update-shfv2" event={"ID":"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607","Type":"ContainerStarted","Data":"0488d82d62aae3b848d73ce68527757f78ac4e24690c4bfdbb4078b5c06546b4"} Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.783663 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.170701 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.205693 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.223621 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.225494 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.227127 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.237267 4869 generic.go:334] "Generic (PLEG): container finished" podID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerID="e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156" exitCode=0 Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.237335 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerDied","Data":"e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156"} Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.239139 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.276187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerStarted","Data":"1afcccd94d0ae4b407fdf8e32cfa845c1df5d114a1c85b8851a8082600f3c817"} Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.280574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37","Type":"ContainerStarted","Data":"f9d4129a4b135e4d9ca0c9026d3686e0e559273d968fe5246bfa69cd577729e7"} Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.280612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37","Type":"ContainerStarted","Data":"6abb4698df2580c10205409433ce54feee7c83af065a970024a448e0ecc48940"} Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.289179 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.290753 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerStarted","Data":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.324035 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.331211 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.3074167230000002 podStartE2EDuration="4.331180956s" podCreationTimestamp="2026-02-02 15:24:08 +0000 UTC" firstStartedPulling="2026-02-02 15:24:10.061188275 +0000 UTC m=+3051.705825045" lastFinishedPulling="2026-02-02 15:24:11.084952508 +0000 UTC m=+3052.729589278" observedRunningTime="2026-02-02 15:24:12.310029938 +0000 UTC m=+3053.954666708" watchObservedRunningTime="2026-02-02 15:24:12.331180956 +0000 UTC m=+3053.975817716" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 
crc kubenswrapper[4869]: I0202 15:24:12.340260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340349 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.355762 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bc7747c5b-j78w2"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.357354 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.370578 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bc7747c5b-j78w2"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.442008 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsdh\" (UniqueName: \"kubernetes.io/projected/8714c728-0089-451b-8335-ab32ef8c39ac-kube-api-access-pcsdh\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.442363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-scripts\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-combined-ca-bundle\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.445081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-config-data\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-secret-key\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8714c728-0089-451b-8335-ab32ef8c39ac-logs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-tls-certs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446883 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.447069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.448264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.454604 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.455739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.469538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " 
pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.469643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.469806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.479679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548700 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsdh\" (UniqueName: \"kubernetes.io/projected/8714c728-0089-451b-8335-ab32ef8c39ac-kube-api-access-pcsdh\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548763 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-scripts\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-combined-ca-bundle\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548817 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-config-data\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-secret-key\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8714c728-0089-451b-8335-ab32ef8c39ac-logs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548919 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-tls-certs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.551157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-scripts\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.551667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-config-data\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.551666 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8714c728-0089-451b-8335-ab32ef8c39ac-logs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.557263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-secret-key\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.557621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.594491 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-tls-certs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.613568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsdh\" (UniqueName: \"kubernetes.io/projected/8714c728-0089-451b-8335-ab32ef8c39ac-kube-api-access-pcsdh\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.632234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-combined-ca-bundle\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.688989 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.692218 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.729744 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.753983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"5b666475-dc9a-41e9-b087-b2042c2dd80f\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.754349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"5b666475-dc9a-41e9-b087-b2042c2dd80f\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.755031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b666475-dc9a-41e9-b087-b2042c2dd80f" (UID: "5b666475-dc9a-41e9-b087-b2042c2dd80f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.759450 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28" (OuterVolumeSpecName: "kube-api-access-48b28") pod "5b666475-dc9a-41e9-b087-b2042c2dd80f" (UID: "5b666475-dc9a-41e9-b087-b2042c2dd80f"). InnerVolumeSpecName "kube-api-access-48b28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856205 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856799 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856820 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.857324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" (UID: "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.864273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266" (OuterVolumeSpecName: "kube-api-access-67266") pod "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" (UID: "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607"). InnerVolumeSpecName "kube-api-access-67266". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.959238 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.959269 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.249944 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bc7747c5b-j78w2"] Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.262560 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.301154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerDied","Data":"0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.301196 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.301164 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.317159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerStarted","Data":"bb317fe37d1fca98ae0b5bc915c94ff30a5b109bb554ebf2814b1106d864e8a6"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.318762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bc7747c5b-j78w2" event={"ID":"8714c728-0089-451b-8335-ab32ef8c39ac","Type":"ContainerStarted","Data":"88b020470bc6e0c38a73e136a5b1e9a2c001f26244bfbec5264f95f6e6f2b31f"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.322190 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerStarted","Data":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.335260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerStarted","Data":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.335448 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" containerID="cri-o://69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" gracePeriod=30 Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.335499 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd" containerID="cri-o://71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" gracePeriod=30 Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.349354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ffb18e2a-67e6-4932-97fb-dd57b66f6c93","Type":"ContainerStarted","Data":"2af17ad0c7dda96215a13938bcace47860a44d057efe2c08c33d929939e077f9"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.352050 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.352058 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d921-account-create-update-shfv2" event={"ID":"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607","Type":"ContainerDied","Data":"0488d82d62aae3b848d73ce68527757f78ac4e24690c4bfdbb4078b5c06546b4"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.352121 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0488d82d62aae3b848d73ce68527757f78ac4e24690c4bfdbb4078b5c06546b4" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.373416 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.373396179 podStartE2EDuration="4.373396179s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:13.361326563 +0000 UTC m=+3055.005963343" watchObservedRunningTime="2026-02-02 15:24:13.373396179 +0000 UTC m=+3055.018032949" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.466357 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:13 crc kubenswrapper[4869]: E0202 15:24:13.467065 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.956861 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092343 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.094203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.094218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs" (OuterVolumeSpecName: "logs") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.098647 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.099264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8" (OuterVolumeSpecName: "kube-api-access-2kdq8") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "kube-api-access-2kdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.099378 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts" (OuterVolumeSpecName: "scripts") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.101960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph" (OuterVolumeSpecName: "ceph") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.127329 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.178178 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data" (OuterVolumeSpecName: "config-data") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206805 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206836 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206850 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206857 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206867 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206874 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206882 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206891 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.229707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.233678 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.251346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.308673 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.308709 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.371304 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" containerID="cri-o://420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" gracePeriod=30 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.371604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerStarted","Data":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.371884 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" containerID="cri-o://44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" gracePeriod=30 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374757 4869 generic.go:334] "Generic (PLEG): container finished" podID="94981156-d105-463b-90e1-db9b2dbbb853" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" exitCode=0 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374788 4869 generic.go:334] "Generic (PLEG): container finished" podID="94981156-d105-463b-90e1-db9b2dbbb853" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" exitCode=143 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374861 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerDied","Data":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerDied","Data":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374900 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerDied","Data":"57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374931 4869 scope.go:117] 
"RemoveContainer" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.375068 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.399392 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ffb18e2a-67e6-4932-97fb-dd57b66f6c93","Type":"ContainerStarted","Data":"971ea371362a10335e31b3b88f5517683d06a7c5420335425391975c903d9b60"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.402733 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.402707287 podStartE2EDuration="5.402707287s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:14.400128294 +0000 UTC m=+3056.044765054" watchObservedRunningTime="2026-02-02 15:24:14.402707287 +0000 UTC m=+3056.047344057" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.430071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.8020294329999995 podStartE2EDuration="6.430051067s" podCreationTimestamp="2026-02-02 15:24:08 +0000 UTC" firstStartedPulling="2026-02-02 15:24:11.030384392 +0000 UTC m=+3052.675021162" lastFinishedPulling="2026-02-02 15:24:12.658406026 +0000 UTC m=+3054.303042796" observedRunningTime="2026-02-02 15:24:14.424502451 +0000 UTC m=+3056.069139251" watchObservedRunningTime="2026-02-02 15:24:14.430051067 +0000 UTC m=+3056.074687837" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.452844 4869 scope.go:117] "RemoveContainer" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.465381 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.482095 4869 scope.go:117] "RemoveContainer" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.484127 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": container with ID starting with 71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc not found: ID does not exist" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.484176 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} err="failed to get container status \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": rpc error: code = NotFound desc = could not find container \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": container with ID starting with 71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.484207 4869 scope.go:117] "RemoveContainer" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 
15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.485006 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": container with ID starting with 69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4 not found: ID does not exist" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485033 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} err="failed to get container status \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": rpc error: code = NotFound desc = could not find container \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": container with ID starting with 69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4 not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485050 4869 scope.go:117] "RemoveContainer" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485184 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485726 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} err="failed to get container status \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": rpc error: code = NotFound desc = could not find container \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": container with ID starting with 71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485750 4869 scope.go:117] "RemoveContainer" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485999 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} err="failed to get container status \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": rpc error: code = NotFound desc = could not find container \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": container with ID starting with 69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4 not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.513899 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514651 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514677 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514700 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerName="mariadb-database-create" Feb 02 15:24:14 crc 
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514709 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerName="mariadb-database-create"
Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514734 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerName="mariadb-account-create-update"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514745 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerName="mariadb-account-create-update"
Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514767 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514774 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515094 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerName="mariadb-database-create"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515126 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515147 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515160 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerName="mariadb-account-create-update"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.516831 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.520218 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.520718 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.522797 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623534 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623657 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-logs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-ceph\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623927 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfphl\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-kube-api-access-cfphl\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727236 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-ceph\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfphl\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-kube-api-access-cfphl\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-logs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727502 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.730422 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.734123 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.734394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-logs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.734478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.735006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-ceph\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.736523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.740300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.748798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.754938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfphl\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-kube-api-access-cfphl\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.771886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.874440 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.992321 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-jf2x2"]
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.999416 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.010173 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jf2x2"]
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.011135 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-gtk54"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.011891 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046715 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046852 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046981 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.158617 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.164960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.176044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.188571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.263098 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.346095 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jf2x2"
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353838 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") "
Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354714 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "httpd-run".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.355365 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs" (OuterVolumeSpecName: "logs") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.359558 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts" (OuterVolumeSpecName: "scripts") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.359821 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.365154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph" (OuterVolumeSpecName: "ceph") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.379515 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb" (OuterVolumeSpecName: "kube-api-access-9rjnb") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "kube-api-access-9rjnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.394450 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.434335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442682 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" exitCode=0 Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442708 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" exitCode=143 Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442768 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerDied","Data":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerDied","Data":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerDied","Data":"1afcccd94d0ae4b407fdf8e32cfa845c1df5d114a1c85b8851a8082600f3c817"} Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442845 4869 scope.go:117] "RemoveContainer" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.443073 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456173 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456196 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456218 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456229 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456238 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456247 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456257 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456266 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.486291 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94981156-d105-463b-90e1-db9b2dbbb853" path="/var/lib/kubelet/pods/94981156-d105-463b-90e1-db9b2dbbb853/volumes" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.494435 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.504719 4869 scope.go:117] "RemoveContainer" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.520073 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data" (OuterVolumeSpecName: "config-data") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.542763 4869 scope.go:117] "RemoveContainer" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.543179 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": container with ID starting with 44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a not found: ID does not exist" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543212 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} err="failed to get container status \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": rpc error: code = NotFound desc = could not find container \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": container with ID starting with 44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543241 4869 scope.go:117] "RemoveContainer" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.543448 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": container with ID starting with 420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61 not found: ID does not exist" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543470 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} err="failed to get container status \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": rpc error: code = NotFound desc = could not find container \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": container with ID starting with 420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61 not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543487 4869 scope.go:117] "RemoveContainer" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543666 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} err="failed to get container status \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": rpc error: code = NotFound desc = could not find container \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": container with ID starting with 44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543696 4869 scope.go:117] "RemoveContainer" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.544018 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} err="failed to get container status \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": rpc error: code = NotFound desc = could not find container \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": container with ID starting with 420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61 not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.549086 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.558470 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.558502 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.788419 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.796735 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.811369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.815569 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.815594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.815608 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.815616 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.816138 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.816152 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.817522 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.823811 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.826499 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.827931 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865386 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxvd\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-kube-api-access-tvxvd\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865589 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.922524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jf2x2"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.966963 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967021 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxvd\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-kube-api-access-tvxvd\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: 
I0202 15:24:15.967253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.970186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.970413 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.970476 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.972609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.974487 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.976567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.978121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.988662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.992429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxvd\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-kube-api-access-tvxvd\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.004484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.136382 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.482048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerStarted","Data":"b1c4627ca0ca190d9e5b9123d862a6e8bc80353fedf05e6831015a4a4f791ce4"} Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.492054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6439a406-db54-421d-b5c7-5911b35cfda3","Type":"ContainerStarted","Data":"6234895e93703654cdba09b154044e2a9aadcb94c9519fae5cdcd0e6aae32ce1"} Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.800040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.476478 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" path="/var/lib/kubelet/pods/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22/volumes" Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.514664 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6439a406-db54-421d-b5c7-5911b35cfda3","Type":"ContainerStarted","Data":"aa24e45981cea8b1278e50a7fe709e50641dc1b8313f907be1b6ff84c40bfe67"} Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.514717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6439a406-db54-421d-b5c7-5911b35cfda3","Type":"ContainerStarted","Data":"e9b7c37f5dd6e0ffba322c14393a839bd9de8e92d96f01031a371abfff466c3f"} Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.554616 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.554600956 podStartE2EDuration="3.554600956s" podCreationTimestamp="2026-02-02 15:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:17.534671628 +0000 UTC m=+3059.179308398" watchObservedRunningTime="2026-02-02 15:24:17.554600956 +0000 UTC m=+3059.199237726" Feb 02 15:24:19 crc kubenswrapper[4869]: I0202 15:24:19.342979 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-backup-0" Feb 02 15:24:19 crc kubenswrapper[4869]: I0202 15:24:19.430590 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:19 crc kubenswrapper[4869]: I0202 15:24:19.602699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 02 15:24:22 crc kubenswrapper[4869]: W0202 15:24:22.055647 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f5a226_bdff_4182_971c_e3a22264a7d6.slice/crio-725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238 WatchSource:0}: Error finding container 725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238: Status 404 returned error can't find the container with id 725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238 Feb 02 15:24:22 crc kubenswrapper[4869]: I0202 15:24:22.562100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4f5a226-bdff-4182-971c-e3a22264a7d6","Type":"ContainerStarted","Data":"725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238"} Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.875779 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.876395 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.979735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.987381 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.593345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4f5a226-bdff-4182-971c-e3a22264a7d6","Type":"ContainerStarted","Data":"f61d9fbf5f53654cab8f027de80001582ffe118f15af983135ef49928bf0260e"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.596110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerStarted","Data":"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.596155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerStarted","Data":"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.600872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bc7747c5b-j78w2" event={"ID":"8714c728-0089-451b-8335-ab32ef8c39ac","Type":"ContainerStarted","Data":"dbdb3ed5bc4906a409e00c9fb4f60c43ae1a1ef35da26139ad274f01a262a6a3"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.600926 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bc7747c5b-j78w2" event={"ID":"8714c728-0089-451b-8335-ab32ef8c39ac","Type":"ContainerStarted","Data":"56310bc7a94d5d1ce987814af1e280656dcc3680b558e4e3eb45fea86ee388fe"} Feb 02 15:24:25 crc 
kubenswrapper[4869]: I0202 15:24:25.603952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerStarted","Data":"8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.603992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerStarted","Data":"e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.604108 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74c696d745-m9v9m" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" containerID="cri-o://e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.604729 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74c696d745-m9v9m" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" containerID="cri-o://8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.613178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerStarted","Data":"5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.624633 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74748d768-vjhn2" podStartSLOduration=2.290035868 podStartE2EDuration="13.624613383s" podCreationTimestamp="2026-02-02 15:24:12 +0000 UTC" firstStartedPulling="2026-02-02 15:24:13.286773878 +0000 UTC m=+3054.931410648" lastFinishedPulling="2026-02-02 15:24:24.621351333 +0000 UTC m=+3066.265988163" observedRunningTime="2026-02-02 15:24:25.619800506 +0000 UTC m=+3067.264437316" watchObservedRunningTime="2026-02-02 15:24:25.624613383 +0000 UTC m=+3067.269250173" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626143 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d66c5779c-pggjz" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" containerID="cri-o://ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626343 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d66c5779c-pggjz" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" containerID="cri-o://790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerStarted","Data":"790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626901 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" 
event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerStarted","Data":"ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626993 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.627010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.645735 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-jf2x2" podStartSLOduration=2.9657750419999998 podStartE2EDuration="11.64571559s" podCreationTimestamp="2026-02-02 15:24:14 +0000 UTC" firstStartedPulling="2026-02-02 15:24:15.941340803 +0000 UTC m=+3057.585977573" lastFinishedPulling="2026-02-02 15:24:24.621281351 +0000 UTC m=+3066.265918121" observedRunningTime="2026-02-02 15:24:25.642105061 +0000 UTC m=+3067.286741851" watchObservedRunningTime="2026-02-02 15:24:25.64571559 +0000 UTC m=+3067.290352360" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.665006 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6bc7747c5b-j78w2" podStartSLOduration=2.329970607 podStartE2EDuration="13.664971151s" podCreationTimestamp="2026-02-02 15:24:12 +0000 UTC" firstStartedPulling="2026-02-02 15:24:13.286209875 +0000 UTC m=+3054.930846645" lastFinishedPulling="2026-02-02 15:24:24.621210419 +0000 UTC m=+3066.265847189" observedRunningTime="2026-02-02 15:24:25.659704912 +0000 UTC m=+3067.304341682" watchObservedRunningTime="2026-02-02 15:24:25.664971151 +0000 UTC m=+3067.309607941" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.684506 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74c696d745-m9v9m" podStartSLOduration=2.8311714759999997 podStartE2EDuration="16.684483769s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="2026-02-02 15:24:10.762476303 +0000 UTC m=+3052.407113073" lastFinishedPulling="2026-02-02 15:24:24.615788596 +0000 UTC m=+3066.260425366" observedRunningTime="2026-02-02 15:24:25.680126602 +0000 UTC m=+3067.324763382" watchObservedRunningTime="2026-02-02 15:24:25.684483769 +0000 UTC m=+3067.329120539" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.710295 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d66c5779c-pggjz" podStartSLOduration=2.865635129 podStartE2EDuration="16.71027171s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="2026-02-02 15:24:10.776989019 +0000 UTC m=+3052.421625789" lastFinishedPulling="2026-02-02 15:24:24.6216256 +0000 UTC m=+3066.266262370" observedRunningTime="2026-02-02 15:24:25.706365004 +0000 UTC m=+3067.351001774" watchObservedRunningTime="2026-02-02 15:24:25.71027171 +0000 UTC m=+3067.354908480" Feb 02 15:24:26 crc kubenswrapper[4869]: I0202 15:24:26.462270 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:26 crc kubenswrapper[4869]: E0202 15:24:26.462799 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:26 crc kubenswrapper[4869]: I0202 15:24:26.635421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4f5a226-bdff-4182-971c-e3a22264a7d6","Type":"ContainerStarted","Data":"632a991072605ccdb319651bb13ce3e2e907da3751ea2ca2a84d008da38a6a16"} Feb 02 15:24:26 crc kubenswrapper[4869]: I0202 15:24:26.659188 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.659169689 podStartE2EDuration="11.659169689s" podCreationTimestamp="2026-02-02 15:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:26.65511614 +0000 UTC m=+3068.299752910" watchObservedRunningTime="2026-02-02 15:24:26.659169689 +0000 UTC m=+3068.303806449" Feb 02 15:24:27 crc kubenswrapper[4869]: I0202 15:24:27.643216 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 15:24:27 crc kubenswrapper[4869]: I0202 15:24:27.643523 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.883683 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.935030 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.935152 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.937555 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 15:24:30 crc kubenswrapper[4869]: I0202 15:24:30.017372 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.558527 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.558879 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.690572 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.691514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.137461 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.137862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.184627 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 
15:24:36.202915 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.736477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.736518 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:37 crc kubenswrapper[4869]: I0202 15:24:37.462693 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:37 crc kubenswrapper[4869]: E0202 15:24:37.463264 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:38 crc kubenswrapper[4869]: I0202 15:24:38.700585 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:38 crc kubenswrapper[4869]: I0202 15:24:38.734143 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:40 crc kubenswrapper[4869]: I0202 15:24:40.803090 4869 generic.go:334] "Generic (PLEG): container finished" podID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerID="5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72" exitCode=0 Feb 02 15:24:40 crc kubenswrapper[4869]: I0202 15:24:40.803325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerDied","Data":"5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72"} Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.235188 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.268779 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.268837 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.268942 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.269081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.274982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c" (OuterVolumeSpecName: "kube-api-access-j297c") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "kube-api-access-j297c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.279279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.287125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data" (OuterVolumeSpecName: "config-data") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.297275 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371195 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371228 4869 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371237 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371248 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.559770 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.692476 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bc7747c5b-j78w2" podUID="8714c728-0089-451b-8335-ab32ef8c39ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.820863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerDied","Data":"b1c4627ca0ca190d9e5b9123d862a6e8bc80353fedf05e6831015a4a4f791ce4"} Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.820902 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c4627ca0ca190d9e5b9123d862a6e8bc80353fedf05e6831015a4a4f791ce4" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.820971 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.167889 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: E0202 15:24:43.168411 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerName="manila-db-sync" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.168435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerName="manila-db-sync" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.168709 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerName="manila-db-sync" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.169996 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173540 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173699 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173794 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173860 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-gtk54" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189742 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189857 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189933 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.203189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.230238 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.231720 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.236668 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290743 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"manila-scheduler-0\" (UID: 
\"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291304 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291321 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291342 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291464 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.294865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.300403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.301238 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.313832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.315220 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.329064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.329309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.387971 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5kt5g"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.410901 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.442929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5kt5g"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654fm\" (UniqueName: \"kubernetes.io/projected/2d493264-07c6-4809-9a3e-809e60997896-kube-api-access-654fm\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: 
\"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-config\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450013 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450162 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.451506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " 
pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.451549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.467872 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.471665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.474533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.479537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.480200 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.502424 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.504533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.538189 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.540190 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.546061 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654fm\" (UniqueName: \"kubernetes.io/projected/2d493264-07c6-4809-9a3e-809e60997896-kube-api-access-654fm\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-config\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.553267 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.553949 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-config\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.559450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.563547 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.563592 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.563900 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.564415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.575654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654fm\" (UniqueName: \"kubernetes.io/projected/2d493264-07c6-4809-9a3e-809e60997896-kube-api-access-654fm\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.605338 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656530 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656801 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod 
\"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.758688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.758969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759159 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.760697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.782466 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.796687 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.796730 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.796694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.797247 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:43.932238 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.434658 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.502169 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.700685 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5kt5g"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.833672 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.869831 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerStarted","Data":"3af6ab75a56f8bed06c1d0bc83b535b2352c23686aa45e49a7bac1b6f3b2b711"} Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.871377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerStarted","Data":"a5dd2b6085a889dc98e2fb099d3063bc3e713c383fe9013a6e33aac2e5968482"} Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.872577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" event={"ID":"2d493264-07c6-4809-9a3e-809e60997896","Type":"ContainerStarted","Data":"0af54a4c5cfceed254885ffe8b56a8d2ad390290b0f4d7e1cc9abf8392e0cfd6"} Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.411391 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.888304 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerStarted","Data":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.888583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerStarted","Data":"b76b0402055bbe916acc9c514573c63133b5f78cbe7cb50685001cf6af0e5d07"} Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.892369 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d493264-07c6-4809-9a3e-809e60997896" containerID="daf9bbfc3311debaff2b01a5093e0472118daf0097059296dbbc8754ec88d996" exitCode=0 Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.892405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" event={"ID":"2d493264-07c6-4809-9a3e-809e60997896","Type":"ContainerDied","Data":"daf9bbfc3311debaff2b01a5093e0472118daf0097059296dbbc8754ec88d996"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.944151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" event={"ID":"2d493264-07c6-4809-9a3e-809e60997896","Type":"ContainerStarted","Data":"bae86ceaafa3eeec39dce3c0c4ccb28223cd4c297aed6a1d3741a7087742cdc9"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.946172 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968777 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerStarted","Data":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968822 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968834 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api" containerID="cri-o://3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" gracePeriod=30 Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968847 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log" containerID="cri-o://276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" gracePeriod=30 Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.974715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerStarted","Data":"4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.974748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerStarted","Data":"dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.001268 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" podStartSLOduration=5.001249228 podStartE2EDuration="5.001249228s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:47.981374342 +0000 UTC m=+3089.626011112" watchObservedRunningTime="2026-02-02 15:24:48.001249228 +0000 UTC m=+3089.645885998" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.065609 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.293904732 podStartE2EDuration="5.065579503s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="2026-02-02 15:24:45.426757914 +0000 UTC m=+3087.071394684" lastFinishedPulling="2026-02-02 15:24:46.198432685 +0000 UTC m=+3087.843069455" observedRunningTime="2026-02-02 15:24:48.046272 +0000 UTC m=+3089.690908770" watchObservedRunningTime="2026-02-02 15:24:48.065579503 +0000 UTC m=+3089.710216283" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.088429 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.088411782 podStartE2EDuration="5.088411782s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:48.086460835 +0000 UTC m=+3089.731097605" watchObservedRunningTime="2026-02-02 15:24:48.088411782 +0000 UTC m=+3089.733048552" Feb 02 15:24:48 crc kubenswrapper[4869]: E0202 15:24:48.254098 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67738938_12ff_40e9_8c30_d0993939eafb.slice/crio-276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67738938_12ff_40e9_8c30_d0993939eafb.slice/crio-conmon-276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2.scope\": RecentStats: unable to find data in memory cache]" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.770092 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907213 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907443 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907603 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.908008 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.909727 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs" (OuterVolumeSpecName: "logs") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.916211 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts" (OuterVolumeSpecName: "scripts") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.921075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb" (OuterVolumeSpecName: "kube-api-access-jxxdb") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "kube-api-access-jxxdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.930308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.972886 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data" (OuterVolumeSpecName: "config-data") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990789 4869 generic.go:334] "Generic (PLEG): container finished" podID="67738938-12ff-40e9-8c30-d0993939eafb" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" exitCode=0 Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990822 4869 generic.go:334] "Generic (PLEG): container finished" podID="67738938-12ff-40e9-8c30-d0993939eafb" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" exitCode=143 Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990845 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerDied","Data":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990887 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerDied","Data":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990990 4869 scope.go:117] "RemoveContainer" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.991087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerDied","Data":"b76b0402055bbe916acc9c514573c63133b5f78cbe7cb50685001cf6af0e5d07"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.995089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009884 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009935 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009947 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009959 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009968 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009976 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.099743 4869 scope.go:117] "RemoveContainer" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.126603 4869 scope.go:117] "RemoveContainer" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.127163 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": container with ID starting with 
3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a not found: ID does not exist" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.127211 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} err="failed to get container status \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": rpc error: code = NotFound desc = could not find container \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": container with ID starting with 3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a not found: ID does not exist" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.127238 4869 scope.go:117] "RemoveContainer" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.130273 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": container with ID starting with 276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2 not found: ID does not exist" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130318 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} err="failed to get container status \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": rpc error: code = NotFound desc = could not find container \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": container with ID starting with 276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2 not found: ID does not exist" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130345 4869 scope.go:117] "RemoveContainer" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130871 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} err="failed to get container status \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": rpc error: code = NotFound desc = could not find container \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": container with ID starting with 3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a not found: ID does not exist" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130953 4869 scope.go:117] "RemoveContainer" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.132262 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} err="failed to get container status \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": rpc error: code = NotFound desc = could not find container \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": container with ID starting with 276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2 not found: ID does not exist" Feb 02 
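The NotFound/DeleteContainer exchange above is harmless retry noise: the containers had already been removed, so the follow-up RemoveContainer calls find nothing. More interesting is how quickly the graceful kill completed, since both manila-api containers were asked to stop at 15:24:47.96 with gracePeriod=30 and were reported dead at 15:24:48.99. A sketch that measures kill-to-death latency from the klog timestamps, assuming one entry per line as re-flowed above so a match never bleeds into the next entry (file name illustrative):

```python
import re
import sys
from datetime import datetime

# Match the klog prefix (I/E + MMDD + time) on the same line as the
# event of interest, then diff "Killing container" against the PLEG
# "ContainerDied" report for the same container ID.
TS = r'[IE]\d{4} (\d\d:\d\d:\d\d\.\d+)'
def t(s):
    return datetime.strptime(s, "%H:%M:%S.%f")
text = open(sys.argv[1]).read()
kills = {cid: t(ts) for ts, cid in re.findall(
    TS + r'.*?"Killing container with a grace period".*?containerID="cri-o://([0-9a-f]+)"', text)}
for ts, cid in re.findall(TS + r'.*?"ContainerDied","Data":"([0-9a-f]+)"', text):
    if cid in kills:
        delta = (t(ts) - kills[cid]).total_seconds()
        print(f"{cid[:12]}... died {delta:.3f}s after kill request")
```

For the excerpt above both containers report roughly 1.0 s, far inside the 30 s grace period.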
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.340185 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.351290 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.363935 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.364454 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364469 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log"
Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.364494 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364502 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364750 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364774 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.366086 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.375131 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.382524 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.382658 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.382728 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.469424 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.469702 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.510310 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67738938-12ff-40e9-8c30-d0993939eafb" path="/var/lib/kubelet/pods/67738938-12ff-40e9-8c30-d0993939eafb/volumes"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527454 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrnc\" (UniqueName: \"kubernetes.io/projected/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-kube-api-access-kxrnc\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527672 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data-custom\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527744 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-etc-machine-id\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-public-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527957 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-scripts\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.528047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-logs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.528138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.629660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0"
Feb 02 15:24:49 
crc kubenswrapper[4869]: I0202 15:24:49.630734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data-custom\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.630841 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-etc-machine-id\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.630904 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-etc-machine-id\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-public-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-scripts\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-logs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631400 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrnc\" (UniqueName: \"kubernetes.io/projected/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-kube-api-access-kxrnc\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.634841 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-public-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc 
kubenswrapper[4869]: I0202 15:24:49.634959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data-custom\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.635195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-logs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.635679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.636963 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-scripts\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.637641 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.651152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.651503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrnc\" (UniqueName: \"kubernetes.io/projected/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-kube-api-access-kxrnc\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.701143 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:50 crc kubenswrapper[4869]: I0202 15:24:50.303648 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:50 crc kubenswrapper[4869]: W0202 15:24:50.305520 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68d3a7fe_1a89_4d45_9ffd_8057e313d3e9.slice/crio-834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6 WatchSource:0}: Error finding container 834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6: Status 404 returned error can't find the container with id 834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6 Feb 02 15:24:51 crc kubenswrapper[4869]: I0202 15:24:51.027772 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9","Type":"ContainerStarted","Data":"e8f31c4603a24ff86c886e9397b2233da011fffea9ade621ff2084364663d387"} Feb 02 15:24:51 crc kubenswrapper[4869]: I0202 15:24:51.028145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9","Type":"ContainerStarted","Data":"834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6"} Feb 02 15:24:52 crc kubenswrapper[4869]: I0202 15:24:52.050752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9","Type":"ContainerStarted","Data":"6da282bcfb7b5348e18133c3cc81a9ecd307f63f23f02853a371f454c1dc053b"} Feb 02 15:24:52 crc kubenswrapper[4869]: I0202 15:24:52.051066 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 15:24:52 crc kubenswrapper[4869]: I0202 15:24:52.099531 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.099512555 podStartE2EDuration="3.099512555s" podCreationTimestamp="2026-02-02 15:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:52.088033694 +0000 UTC m=+3093.732670464" watchObservedRunningTime="2026-02-02 15:24:52.099512555 +0000 UTC m=+3093.744149325" Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.503823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.607100 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.699279 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.699981 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" containerID="cri-o://f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8" gracePeriod=10 Feb 02 15:24:54 crc kubenswrapper[4869]: I0202 15:24:54.077268 4869 generic.go:334] "Generic (PLEG): container finished" podID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerID="f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8" exitCode=0 Feb 02 15:24:54 crc kubenswrapper[4869]: I0202 15:24:54.077320 
4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerDied","Data":"f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8"} Feb 02 15:24:54 crc kubenswrapper[4869]: I0202 15:24:54.947371 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078396 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078744 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078771 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.088104 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj" (OuterVolumeSpecName: "kube-api-access-898pj") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "kube-api-access-898pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.115306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerDied","Data":"f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12"} Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.115507 4869 scope.go:117] "RemoveContainer" containerID="f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.115631 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.133494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.142023 4869 scope.go:117] "RemoveContainer" containerID="267d2b5ca4d238e5b769ca48e7a762954290c341c2ea35ac8b67c09d6240f345" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.171703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.182295 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.182329 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.182341 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.186954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config" (OuterVolumeSpecName: "config") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.188864 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.200630 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.219011 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.253123 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.284134 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.284167 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.284180 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.478209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.478406 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153251 4869 generic.go:334] "Generic (PLEG): container finished" podID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerID="8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153705 4869 generic.go:334] "Generic (PLEG): container finished" podID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerID="e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153745 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerDied","Data":"8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerDied","Data":"e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.157158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerStarted","Data":"cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.157181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerStarted","Data":"bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160510 4869 generic.go:334] "Generic (PLEG): container finished" podID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerID="790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160526 4869 generic.go:334] "Generic (PLEG): container finished" podID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerID="ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerDied","Data":"790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerDied","Data":"ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.251636 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.286646 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.287711 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.272438327 podStartE2EDuration="13.287694232s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="2026-02-02 15:24:45.52995468 +0000 UTC m=+3087.174591440" lastFinishedPulling="2026-02-02 15:24:54.545210575 +0000 UTC m=+3096.189847345" observedRunningTime="2026-02-02 15:24:56.184096896 +0000 UTC m=+3097.828733676" watchObservedRunningTime="2026-02-02 15:24:56.287694232 +0000 UTC m=+3097.932330992" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.322812 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323446 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323391 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs" (OuterVolumeSpecName: "logs") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.324183 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.330116 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp" (OuterVolumeSpecName: "kube-api-access-9mtmp") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "kube-api-access-9mtmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.361057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.368951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts" (OuterVolumeSpecName: "scripts") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.413446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data" (OuterVolumeSpecName: "config-data") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433764 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433832 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439177 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs" (OuterVolumeSpecName: "logs") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439614 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439634 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439644 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439652 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439661 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.460615 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8" (OuterVolumeSpecName: "kube-api-access-6s2c8") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "kube-api-access-6s2c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.497589 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.526175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts" (OuterVolumeSpecName: "scripts") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.547724 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data" (OuterVolumeSpecName: "config-data") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548386 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548406 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548415 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548423 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.173197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerDied","Data":"2db55e6d04f2819c1e06bcde8e721cfa825f9601f520cf4e3f6565c2aaa1d4aa"} Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.174652 4869 scope.go:117] "RemoveContainer" containerID="8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.173229 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.179326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerDied","Data":"1e3835ffee852cf7e2e461dbfd0c1bce873454f7dd01eb6e5bb8f0bd42308327"} Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.179368 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.235060 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.251878 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.267565 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.279242 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.384195 4869 scope.go:117] "RemoveContainer" containerID="e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.409747 4869 scope.go:117] "RemoveContainer" containerID="790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.519611 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" path="/var/lib/kubelet/pods/886da892-6808-4ff8-8fa4-48ad9cd65843/volumes" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.520580 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" path="/var/lib/kubelet/pods/c9b2c09c-26a4-44f4-8dad-d90ef99b6972/volumes" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.521772 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" path="/var/lib/kubelet/pods/f3598164-68b7-40fe-91ce-d4cf2fa64757/volumes" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.631881 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.651975 4869 scope.go:117] "RemoveContainer" containerID="ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.744768 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.758472 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" containerID="cri-o://9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.759140 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" containerID="cri-o://1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.774893 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.781901 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:24:57 crc 
kubenswrapper[4869]: I0202 15:24:57.782253 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" containerID="cri-o://f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.782428 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" containerID="cri-o://75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.782488 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" containerID="cri-o://cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.782537 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" containerID="cri-o://0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" gracePeriod=30 Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.224170 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" exitCode=0 Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.225430 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" exitCode=2 Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.224366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"} Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.225554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"} Feb 02 15:24:59 crc kubenswrapper[4869]: I0202 15:24:59.241692 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" exitCode=0 Feb 02 15:24:59 crc kubenswrapper[4869]: I0202 15:24:59.241773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"} Feb 02 15:25:00 crc kubenswrapper[4869]: I0202 15:25:00.941036 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:37516->10.217.0.247:8443: read: connection reset by peer" Feb 02 15:25:01 crc kubenswrapper[4869]: I0202 15:25:01.152649 4869 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.188:3000/\": dial tcp 10.217.0.188:3000: connect: connection refused" Feb 02 15:25:02 crc kubenswrapper[4869]: I0202 15:25:02.559209 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:25:03 crc kubenswrapper[4869]: I0202 15:25:03.276453 4869 generic.go:334] "Generic (PLEG): container finished" podID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" exitCode=0 Feb 02 15:25:03 crc kubenswrapper[4869]: I0202 15:25:03.276496 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerDied","Data":"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597"} Feb 02 15:25:03 crc kubenswrapper[4869]: I0202 15:25:03.563884 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.463393 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:04 crc kubenswrapper[4869]: E0202 15:25:04.464266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.769275 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858207 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.859289 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.859574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.864087 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts" (OuterVolumeSpecName: "scripts") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.885221 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.892208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669" (OuterVolumeSpecName: "kube-api-access-86669") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "kube-api-access-86669". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.947570 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961494 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961520 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961826 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961965 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961984 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961993 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.972588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data" (OuterVolumeSpecName: "config-data") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.985626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.063991 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.064034 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.210191 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.259852 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297090 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" exitCode=0 Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297142 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"} Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"0796932bd84ec076e7335a7406319502760ed8351d5e889f11c65dc928821a28"} Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297280 4869 scope.go:117] "RemoveContainer" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.298011 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" containerID="cri-o://4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9" gracePeriod=30 Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.298114 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" containerID="cri-o://dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41" gracePeriod=30 Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.337828 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.345399 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.352808 4869 scope.go:117] "RemoveContainer" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364148 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364526 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364543 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364567 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364573 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364592 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364605 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364610 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364622 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364628 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364638 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364644 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364654 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364661 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364675 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364681 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364692 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364698 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364710 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="init" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364717 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="init" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364870 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364883 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364898 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364927 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364935 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364948 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364961 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364976 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364990 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.366593 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.366593 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.372544 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.372673 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.372759 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.378993 4869 scope.go:117] "RemoveContainer" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.386159 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.436437 4869 scope.go:117] "RemoveContainer" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2c9\" (UniqueName: \"kubernetes.io/projected/58069dba-f825-4ee3-972d-85d122369b28-kube-api-access-wt2c9\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470264 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470328 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-log-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-scripts\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-run-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-config-data\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.476375 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" path="/var/lib/kubelet/pods/d49257d3-a8ff-4242-b438-86da53133fb3/volumes"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.509128 4869 scope.go:117] "RemoveContainer" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"
Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.509601 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e\": container with ID starting with 75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e not found: ID does not exist" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.509643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"} err="failed to get container status \"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e\": rpc error: code = NotFound desc = could not find container \"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e\": container with ID starting with 75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e not found: ID does not exist"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.509686 4869 scope.go:117] "RemoveContainer" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"
Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.510168 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4\": container with ID starting with cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4 not found: ID does not exist" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510196 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"} err="failed to get container status \"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4\": rpc error: code = NotFound desc = could not find container \"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4\": container with ID starting with cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4 not found: ID does not exist"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510215 4869 scope.go:117] "RemoveContainer" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"
Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.510456 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc\": container with ID starting with 0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc not found: ID does not exist" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510494 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"} err="failed to get container status \"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc\": rpc error: code = NotFound desc = could not find container \"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc\": container with ID starting with 0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc not found: ID does not exist"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510507 4869 scope.go:117] "RemoveContainer" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"
Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.510697 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2\": container with ID starting with f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2 not found: ID does not exist" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"
Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510721 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"} err="failed to get container status \"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2\": rpc error: code = NotFound desc = could not find container \"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2\": container with ID starting with f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2 not found: ID does not exist"
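The four "ContainerStatus from runtime service failed ... NotFound" errors above are benign: RemoveContainer is retried for container IDs (75cd715d..., cca6a28f..., 0108d5b3..., f72404fc...) that CRI-O has already deleted, so the status lookup behind each retry fails with NotFound and DeleteContainer reports the same. Nothing is leaked; the containers are simply gone. A sketch to separate these expected NotFound errors from other runtime errors (file argument illustrative):

    import sys

    # Count NotFound container-status errors separately from other CRI errors.
    notfound = other = 0
    for line in open(sys.argv[1]):
        if '"ContainerStatus from runtime service failed"' in line:
            if "code = NotFound" in line:
                notfound += 1
            else:
                other += 1
    print(f"benign NotFound: {notfound}, other runtime errors: {other}")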
\"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2c9\" (UniqueName: \"kubernetes.io/projected/58069dba-f825-4ee3-972d-85d122369b28-kube-api-access-wt2c9\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572847 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-log-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.573852 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-run-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.574459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-log-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.578125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.578398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.578483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-scripts\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.579505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-config-data\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " 
pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.580438 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.595882 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2c9\" (UniqueName: \"kubernetes.io/projected/58069dba-f825-4ee3-972d-85d122369b28-kube-api-access-wt2c9\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.731137 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:06 crc kubenswrapper[4869]: I0202 15:25:06.310140 4869 generic.go:334] "Generic (PLEG): container finished" podID="2097f350-00d8-4077-8864-1e2f78ab718f" containerID="dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41" exitCode=0 Feb 02 15:25:06 crc kubenswrapper[4869]: I0202 15:25:06.310457 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerDied","Data":"dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41"} Feb 02 15:25:06 crc kubenswrapper[4869]: I0202 15:25:06.332179 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:06 crc kubenswrapper[4869]: W0202 15:25:06.332288 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58069dba_f825_4ee3_972d_85d122369b28.slice/crio-99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d WatchSource:0}: Error finding container 99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d: Status 404 returned error can't find the container with id 99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.321998 4869 generic.go:334] "Generic (PLEG): container finished" podID="2097f350-00d8-4077-8864-1e2f78ab718f" containerID="4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9" exitCode=0 Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.322046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerDied","Data":"4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9"} Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.324166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d"} Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.426100 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518411 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518514 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518532 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.521970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.527644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.529290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr" (OuterVolumeSpecName: "kube-api-access-hr5cr") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "kube-api-access-hr5cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.529585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts" (OuterVolumeSpecName: "scripts") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.595134 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621775 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621820 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621832 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621846 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621858 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.657093 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data" (OuterVolumeSpecName: "config-data") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.724038 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.334133 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.334103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerDied","Data":"3af6ab75a56f8bed06c1d0bc83b535b2352c23686aa45e49a7bac1b6f3b2b711"} Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.334522 4869 scope.go:117] "RemoveContainer" containerID="dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.339505 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"9ef443639735948af5ed4209c954021920832fd3127c205665051bb01b617b44"} Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.352972 4869 scope.go:117] "RemoveContainer" containerID="4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.392332 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.415056 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.434378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: E0202 15:25:08.434792 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.434804 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" Feb 02 15:25:08 crc kubenswrapper[4869]: E0202 15:25:08.434828 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.434835 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.435029 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.435047 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.436055 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.436055 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.438827 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.446026 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.538919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52b1f1d7-270e-400d-b273-961b7142f38c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htgb4\" (UniqueName: \"kubernetes.io/projected/52b1f1d7-270e-400d-b273-961b7142f38c-kube-api-access-htgb4\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-scripts\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539749 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641331 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-scripts\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641382 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52b1f1d7-270e-400d-b273-961b7142f38c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641669 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htgb4\" (UniqueName: \"kubernetes.io/projected/52b1f1d7-270e-400d-b273-961b7142f38c-kube-api-access-htgb4\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.642962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52b1f1d7-270e-400d-b273-961b7142f38c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.651456 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.651573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.653158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-scripts\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.665621 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.670389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htgb4\" (UniqueName: \"kubernetes.io/projected/52b1f1d7-270e-400d-b273-961b7142f38c-kube-api-access-htgb4\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0"
Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.761454 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 02 15:25:09 crc kubenswrapper[4869]: I0202 15:25:09.341827 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 15:25:09 crc kubenswrapper[4869]: W0202 15:25:09.342954 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b1f1d7_270e_400d_b273_961b7142f38c.slice/crio-e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470 WatchSource:0}: Error finding container e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470: Status 404 returned error can't find the container with id e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470
Feb 02 15:25:09 crc kubenswrapper[4869]: I0202 15:25:09.355381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"d833d53a42063ffd7fc9f6f65a65ecbac948ef1dd2edc5a0153ea7eda2c4d438"}
Feb 02 15:25:09 crc kubenswrapper[4869]: I0202 15:25:09.478729 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" path="/var/lib/kubelet/pods/2097f350-00d8-4077-8864-1e2f78ab718f/volumes"
Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.371558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"52b1f1d7-270e-400d-b273-961b7142f38c","Type":"ContainerStarted","Data":"e15924b76dbfbb1cb39b23c02385461dc684e01ac7ea39a9c16c3e9818b7ac64"}
Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.372025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"52b1f1d7-270e-400d-b273-961b7142f38c","Type":"ContainerStarted","Data":"381bf148d3237cc515b70963309987e8108c879b6a6e8c7ebda985c69ada727d"}
Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.372051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"52b1f1d7-270e-400d-b273-961b7142f38c","Type":"ContainerStarted","Data":"e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470"}
Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.377257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"df9be49cca42f67c993f5977d2c900cbf370a6ee3f97d5d5a2ab900622320942"}
Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.407802 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.407783265 podStartE2EDuration="2.407783265s" podCreationTimestamp="2026-02-02 15:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:25:10.407409455 +0000 UTC m=+3112.052046235" watchObservedRunningTime="2026-02-02 15:25:10.407783265 +0000 UTC m=+3112.052420035"
Feb 02 15:25:11 crc kubenswrapper[4869]: I0202 15:25:11.374648 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
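The pod_startup_latency_tracker record above is worth decoding: podStartSLOduration is, roughly, startup time excluding image pulls, while podStartE2EDuration is wall-clock from pod creation to observed running. For manila-scheduler-0 the two are identical (2.407s) and both pull timestamps are the zero value (0001-01-01), meaning every image was already cached. The ceilometer-0 record a few entries below shows 4.041s SLO against 10.170s E2E because about 6.13s went to image pulls between firstStartedPulling and lastFinishedPulling. A sketch of that arithmetic, with values copied from the ceilometer-0 record (timestamps truncated to microseconds for fromisoformat):

    from datetime import datetime

    # ceilometer-0: the pull window accounts for the SLO/E2E gap.
    first_pull = datetime.fromisoformat("2026-02-02 15:25:06.334245")
    last_pull = datetime.fromisoformat("2026-02-02 15:25:12.463048")
    pull = (last_pull - first_pull).total_seconds()
    e2e, slo = 10.170009, 4.041207
    print(f"pull={pull:.3f}s, e2e-slo={e2e - slo:.3f}s")  # both ~6.129s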
Feb 02 15:25:12 crc kubenswrapper[4869]: I0202 15:25:12.558873 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused"
Feb 02 15:25:13 crc kubenswrapper[4869]: I0202 15:25:13.403816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"ef265b77af52b5aeb03e2bd865dc5c9227c8ce7fb2220f6719b6094699495227"}
Feb 02 15:25:13 crc kubenswrapper[4869]: I0202 15:25:13.404660 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.150516 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.170026 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.041207331 podStartE2EDuration="10.170009516s" podCreationTimestamp="2026-02-02 15:25:05 +0000 UTC" firstStartedPulling="2026-02-02 15:25:06.334245824 +0000 UTC m=+3107.978882594" lastFinishedPulling="2026-02-02 15:25:12.463048009 +0000 UTC m=+3114.107684779" observedRunningTime="2026-02-02 15:25:13.44636267 +0000 UTC m=+3115.090999440" watchObservedRunningTime="2026-02-02 15:25:15.170009516 +0000 UTC m=+3116.814646286"
Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.202548 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.420390 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" containerID="cri-o://bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf" gracePeriod=30
Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.420955 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" containerID="cri-o://cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb" gracePeriod=30
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432357 4869 generic.go:334] "Generic (PLEG): container finished" podID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerID="cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb" exitCode=0
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432665 4869 generic.go:334] "Generic (PLEG): container finished" podID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerID="bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf" exitCode=1
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerDied","Data":"cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb"}
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerDied","Data":"bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf"}
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.759036 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915710 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915891 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") "
Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916812 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.917811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.923948 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts" (OuterVolumeSpecName: "scripts") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.933326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.934174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph" (OuterVolumeSpecName: "ceph") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.943726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg" (OuterVolumeSpecName: "kube-api-access-l5ksg") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "kube-api-access-l5ksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.992954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018352 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018397 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018412 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018426 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018437 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018448 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018461 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.043762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data" (OuterVolumeSpecName: "config-data") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.121017 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.442098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerDied","Data":"a5dd2b6085a889dc98e2fb099d3063bc3e713c383fe9013a6e33aac2e5968482"} Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.442153 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.442164 4869 scope.go:117] "RemoveContainer" containerID="cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.463340 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:17 crc kubenswrapper[4869]: E0202 15:25:17.463688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.464834 4869 scope.go:117] "RemoveContainer" containerID="bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.484870 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.495026 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.512799 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: E0202 15:25:17.514224 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" Feb 02 15:25:17 crc kubenswrapper[4869]: E0202 15:25:17.514272 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514281 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514467 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514490 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.515527 4869 util.go:30] "No sandbox for pod can be found. 
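The machine-config-daemon error in this block is unrelated to the manila/ceilometer churn: that container keeps failing, so the pod worker refuses to start it again until the current back-off window expires. Kubelet's default restart back-off doubles on each failure and is capped at 5m0s, which is the figure quoted in the message. A sketch of that schedule (the constants are kubelet defaults, not values read from this log):

    # Container restart back-off: double each time, capped at 5 minutes.
    backoff, cap = 10, 300  # seconds; kubelet defaults
    schedule = []
    while backoff < cap:
        schedule.append(backoff)
        backoff *= 2
    schedule.append(cap)
    print(schedule)  # [10, 20, 40, 80, 160, 300]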
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.523178 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.525894 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.632835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-ceph\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.632929 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.632962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-scripts\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633310 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8t6k\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-kube-api-access-t8t6k\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633525 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc 
kubenswrapper[4869]: I0202 15:25:17.735592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-scripts\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8t6k\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-kube-api-access-t8t6k\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735763 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735822 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-ceph\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.736105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.736105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.740731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-scripts\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.741496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.741687 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.741902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-ceph\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.751835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.756210 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8t6k\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-kube-api-access-t8t6k\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.855189 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:18 crc kubenswrapper[4869]: I0202 15:25:18.425498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:18 crc kubenswrapper[4869]: I0202 15:25:18.457604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0df9e23b-1681-42de-b9d6-87c4c518d082","Type":"ContainerStarted","Data":"9c76628f582e0f3062c27e386e49bf7e716e644be538157f8c87366563b87726"} Feb 02 15:25:18 crc kubenswrapper[4869]: I0202 15:25:18.762869 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.477569 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" path="/var/lib/kubelet/pods/42c96e15-1507-4cd1-a8b6-382d40ff13d9/volumes" Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.479803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0df9e23b-1681-42de-b9d6-87c4c518d082","Type":"ContainerStarted","Data":"e193a6ce6ea41c820ae9cf91823554297174f7b60f5cd098b687c4412bf810f5"} Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.479836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0df9e23b-1681-42de-b9d6-87c4c518d082","Type":"ContainerStarted","Data":"a1f1480611f391c486bf2a8158a08c804cdc90d6393e3e92236f41953713aa73"} Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.521685 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.521670475 podStartE2EDuration="2.521670475s" podCreationTimestamp="2026-02-02 15:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:25:19.517118934 +0000 UTC m=+3121.161755704" watchObservedRunningTime="2026-02-02 15:25:19.521670475 +0000 UTC m=+3121.166307245" Feb 02 15:25:22 crc kubenswrapper[4869]: I0202 15:25:22.558905 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:25:27 crc kubenswrapper[4869]: I0202 15:25:27.856101 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.165556 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247492 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247619 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247719 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247781 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247879 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.248053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.249120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs" (OuterVolumeSpecName: "logs") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.254185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg" (OuterVolumeSpecName: "kube-api-access-vtscg") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "kube-api-access-vtscg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.254366 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.273726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts" (OuterVolumeSpecName: "scripts") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.281599 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data" (OuterVolumeSpecName: "config-data") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.285272 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.309181 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351220 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351259 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351272 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351282 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351298 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351310 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351320 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583337 4869 generic.go:334] "Generic (PLEG): container finished" podID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" exitCode=137 Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerDied","Data":"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813"} Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583454 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerDied","Data":"bb317fe37d1fca98ae0b5bc915c94ff30a5b109bb554ebf2814b1106d864e8a6"} Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583512 4869 scope.go:117] "RemoveContainer" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.636509 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.647745 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.784765 4869 scope.go:117] "RemoveContainer" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.809724 4869 scope.go:117] "RemoveContainer" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" Feb 02 15:25:28 crc kubenswrapper[4869]: E0202 15:25:28.810334 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597\": container with ID starting with 1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597 not found: ID does not exist" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.810400 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597"} err="failed to get container status \"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597\": rpc error: code = NotFound desc = could not find container \"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597\": container with ID starting with 1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597 not found: ID does not exist" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.810440 4869 scope.go:117] "RemoveContainer" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" Feb 02 15:25:28 crc kubenswrapper[4869]: E0202 15:25:28.810959 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813\": container with ID starting with 9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813 not found: ID does not exist" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.810996 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813"} err="failed to get container status \"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813\": rpc error: code = NotFound desc = could not find container \"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813\": container with ID starting with 9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813 not found: ID does not exist" Feb 02 15:25:29 crc 
kubenswrapper[4869]: I0202 15:25:29.470708 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:29 crc kubenswrapper[4869]: E0202 15:25:29.470973 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:29 crc kubenswrapper[4869]: I0202 15:25:29.474061 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" path="/var/lib/kubelet/pods/74249215-4cd6-45b3-b2ab-6aa245e963f2/volumes" Feb 02 15:25:30 crc kubenswrapper[4869]: I0202 15:25:30.338192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 15:25:35 crc kubenswrapper[4869]: I0202 15:25:35.739013 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 15:25:39 crc kubenswrapper[4869]: I0202 15:25:39.365511 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 15:25:42 crc kubenswrapper[4869]: I0202 15:25:42.463314 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:42 crc kubenswrapper[4869]: E0202 15:25:42.464065 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:54 crc kubenswrapper[4869]: I0202 15:25:54.463023 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:54 crc kubenswrapper[4869]: E0202 15:25:54.463701 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:09 crc kubenswrapper[4869]: I0202 15:26:09.476367 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:09 crc kubenswrapper[4869]: E0202 15:26:09.486536 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:22 crc kubenswrapper[4869]: I0202 15:26:22.463344 4869 scope.go:117] "RemoveContainer" 
containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:22 crc kubenswrapper[4869]: E0202 15:26:22.464190 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:37 crc kubenswrapper[4869]: I0202 15:26:37.463132 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:37 crc kubenswrapper[4869]: E0202 15:26:37.463963 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.927375 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 15:26:42 crc kubenswrapper[4869]: E0202 15:26:42.928409 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928427 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" Feb 02 15:26:42 crc kubenswrapper[4869]: E0202 15:26:42.928453 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928462 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928700 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928723 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.929529 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.931965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-72k4z" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.932642 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.932836 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.934829 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.952889 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101322 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101661 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204181 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204227 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204830 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205577 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205723 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.211066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.214639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc 
kubenswrapper[4869]: I0202 15:26:43.214707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.224173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.241181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.267454 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.709060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.716126 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:26:44 crc kubenswrapper[4869]: I0202 15:26:44.341179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerStarted","Data":"c08d2dd97b8a58de7b4399802e9fdd669c46ddb7f1d0f2a64a4f17afc41bb15d"} Feb 02 15:26:49 crc kubenswrapper[4869]: I0202 15:26:49.470617 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:49 crc kubenswrapper[4869]: E0202 15:26:49.471559 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:27:01 crc kubenswrapper[4869]: I0202 15:27:01.462362 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:27:01 crc kubenswrapper[4869]: E0202 15:27:01.463295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:27:16 crc kubenswrapper[4869]: I0202 15:27:16.462514 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.074898 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.075338 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh7qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(1ccbb21f-23d9-48be-a212-547e064326f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.076566 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" Feb 02 15:27:17 crc kubenswrapper[4869]: I0202 15:27:17.674550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"} Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.678042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" Feb 02 15:27:31 crc kubenswrapper[4869]: I0202 15:27:31.959474 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 15:27:33 crc kubenswrapper[4869]: I0202 15:27:33.864207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerStarted","Data":"ac9a60d8c10f53a0410a3a801abad85986e73c2832d375d41caefea008863171"} Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.022800 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.782522502 podStartE2EDuration="53.022746138s" podCreationTimestamp="2026-02-02 15:26:41 +0000 UTC" firstStartedPulling="2026-02-02 15:26:43.715852606 +0000 UTC m=+3205.360489376" lastFinishedPulling="2026-02-02 15:27:31.956076232 +0000 UTC m=+3253.600713012" observedRunningTime="2026-02-02 15:27:33.884777272 +0000 UTC m=+3255.529414082" watchObservedRunningTime="2026-02-02 15:27:34.022746138 +0000 UTC m=+3255.667382948" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.033783 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.038434 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.052739 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.181088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.181142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.181250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.282996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283592 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.304849 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.403995 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.869312 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:27:35 crc kubenswrapper[4869]: I0202 15:27:35.883674 4869 generic.go:334] "Generic (PLEG): container finished" podID="994000fc-8ba9-47d0-a120-3283878441d5" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" exitCode=0 Feb 02 15:27:35 crc kubenswrapper[4869]: I0202 15:27:35.883715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2"} Feb 02 15:27:35 crc kubenswrapper[4869]: I0202 15:27:35.883986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerStarted","Data":"b588669e13cddff568ae7057846a90811cd14fb59157179225e707a0db9a55e1"} Feb 02 15:27:36 crc kubenswrapper[4869]: I0202 15:27:36.894980 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerStarted","Data":"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1"} Feb 02 15:27:39 crc kubenswrapper[4869]: I0202 15:27:39.923181 4869 generic.go:334] "Generic (PLEG): container finished" podID="994000fc-8ba9-47d0-a120-3283878441d5" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" exitCode=0 Feb 02 15:27:39 crc kubenswrapper[4869]: I0202 15:27:39.923226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1"} Feb 02 15:27:40 crc kubenswrapper[4869]: I0202 15:27:40.936345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerStarted","Data":"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a"} Feb 02 15:27:40 crc kubenswrapper[4869]: I0202 15:27:40.962693 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t626s" podStartSLOduration=3.433605078 podStartE2EDuration="7.962666506s" podCreationTimestamp="2026-02-02 15:27:33 +0000 UTC" firstStartedPulling="2026-02-02 15:27:35.88614264 +0000 UTC m=+3257.530779410" lastFinishedPulling="2026-02-02 15:27:40.415204068 +0000 UTC m=+3262.059840838" observedRunningTime="2026-02-02 15:27:40.957679584 +0000 UTC m=+3262.602316364" watchObservedRunningTime="2026-02-02 15:27:40.962666506 +0000 UTC m=+3262.607303286" Feb 02 15:27:44 crc kubenswrapper[4869]: I0202 15:27:44.405093 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 
15:27:44 crc kubenswrapper[4869]: I0202 15:27:44.405651 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:27:45 crc kubenswrapper[4869]: I0202 15:27:45.453758 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t626s" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" probeResult="failure" output=< Feb 02 15:27:45 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 15:27:45 crc kubenswrapper[4869]: > Feb 02 15:27:55 crc kubenswrapper[4869]: I0202 15:27:55.457000 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t626s" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" probeResult="failure" output=< Feb 02 15:27:55 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 15:27:55 crc kubenswrapper[4869]: > Feb 02 15:28:04 crc kubenswrapper[4869]: I0202 15:28:04.449517 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:28:04 crc kubenswrapper[4869]: I0202 15:28:04.513972 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:28:05 crc kubenswrapper[4869]: I0202 15:28:05.225783 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.200433 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t626s" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" containerID="cri-o://1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" gracePeriod=2 Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.655130 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.741427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"994000fc-8ba9-47d0-a120-3283878441d5\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.741814 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"994000fc-8ba9-47d0-a120-3283878441d5\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.741973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"994000fc-8ba9-47d0-a120-3283878441d5\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.742970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities" (OuterVolumeSpecName: "utilities") pod "994000fc-8ba9-47d0-a120-3283878441d5" (UID: "994000fc-8ba9-47d0-a120-3283878441d5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.748706 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6" (OuterVolumeSpecName: "kube-api-access-h5kr6") pod "994000fc-8ba9-47d0-a120-3283878441d5" (UID: "994000fc-8ba9-47d0-a120-3283878441d5"). InnerVolumeSpecName "kube-api-access-h5kr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.844197 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") on node \"crc\" DevicePath \"\"" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.844240 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.868252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "994000fc-8ba9-47d0-a120-3283878441d5" (UID: "994000fc-8ba9-47d0-a120-3283878441d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.947085 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216236 4869 generic.go:334] "Generic (PLEG): container finished" podID="994000fc-8ba9-47d0-a120-3283878441d5" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" exitCode=0 Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a"} Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216328 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"b588669e13cddff568ae7057846a90811cd14fb59157179225e707a0db9a55e1"} Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216388 4869 scope.go:117] "RemoveContainer" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.247932 4869 scope.go:117] "RemoveContainer" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.281042 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.300822 4869 scope.go:117] "RemoveContainer" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.314628 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.358430 4869 scope.go:117] "RemoveContainer" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" Feb 02 15:28:07 crc kubenswrapper[4869]: E0202 15:28:07.358935 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a\": container with ID starting with 1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a not found: ID does not exist" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.358966 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a"} err="failed to get container status \"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a\": rpc error: code = NotFound desc = could not find container \"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a\": container with ID starting with 1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a not found: ID does not exist" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.358989 4869 scope.go:117] "RemoveContainer" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" Feb 02 15:28:07 crc kubenswrapper[4869]: E0202 15:28:07.359989 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1\": container with ID starting with 49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1 not found: ID does not exist" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.360043 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1"} err="failed to get container status \"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1\": rpc error: code = NotFound desc = could not find container 
\"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1\": container with ID starting with 49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1 not found: ID does not exist" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.360078 4869 scope.go:117] "RemoveContainer" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" Feb 02 15:28:07 crc kubenswrapper[4869]: E0202 15:28:07.360446 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2\": container with ID starting with c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2 not found: ID does not exist" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.360486 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2"} err="failed to get container status \"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2\": rpc error: code = NotFound desc = could not find container \"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2\": container with ID starting with c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2 not found: ID does not exist" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.475937 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994000fc-8ba9-47d0-a120-3283878441d5" path="/var/lib/kubelet/pods/994000fc-8ba9-47d0-a120-3283878441d5/volumes" Feb 02 15:29:45 crc kubenswrapper[4869]: I0202 15:29:45.303678 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:29:45 crc kubenswrapper[4869]: I0202 15:29:45.304317 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.158067 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr"] Feb 02 15:30:00 crc kubenswrapper[4869]: E0202 15:30:00.159054 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-utilities" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-utilities" Feb 02 15:30:00 crc kubenswrapper[4869]: E0202 15:30:00.159100 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159109 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" Feb 02 15:30:00 crc kubenswrapper[4869]: E0202 15:30:00.159119 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-content" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159128 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-content" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159357 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.160279 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.162467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.162663 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.168304 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr"] Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.249444 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.249536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.249676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.351334 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.351417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.351536 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.352440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.360743 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.373773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.495634 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.960598 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr"] Feb 02 15:30:01 crc kubenswrapper[4869]: I0202 15:30:01.302302 4869 generic.go:334] "Generic (PLEG): container finished" podID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerID="649f59c8bcd1fef60a0e269541fe8492287d8caf17da4acdbee1c9eb014035eb" exitCode=0 Feb 02 15:30:01 crc kubenswrapper[4869]: I0202 15:30:01.302405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" event={"ID":"869f1f5c-3365-4b92-8459-76f5a3a9611f","Type":"ContainerDied","Data":"649f59c8bcd1fef60a0e269541fe8492287d8caf17da4acdbee1c9eb014035eb"} Feb 02 15:30:01 crc kubenswrapper[4869]: I0202 15:30:01.302716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" event={"ID":"869f1f5c-3365-4b92-8459-76f5a3a9611f","Type":"ContainerStarted","Data":"33d683832c70e6846d1828ccb1ccb48cffbd7b101d8ecc2f6e1707278aaf3017"} Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.792495 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.908248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"869f1f5c-3365-4b92-8459-76f5a3a9611f\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.908839 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"869f1f5c-3365-4b92-8459-76f5a3a9611f\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.909019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"869f1f5c-3365-4b92-8459-76f5a3a9611f\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.909033 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume" (OuterVolumeSpecName: "config-volume") pod "869f1f5c-3365-4b92-8459-76f5a3a9611f" (UID: "869f1f5c-3365-4b92-8459-76f5a3a9611f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.909577 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.914687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4" (OuterVolumeSpecName: "kube-api-access-6c7s4") pod "869f1f5c-3365-4b92-8459-76f5a3a9611f" (UID: "869f1f5c-3365-4b92-8459-76f5a3a9611f"). InnerVolumeSpecName "kube-api-access-6c7s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.918123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "869f1f5c-3365-4b92-8459-76f5a3a9611f" (UID: "869f1f5c-3365-4b92-8459-76f5a3a9611f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.011963 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.011995 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") on node \"crc\" DevicePath \"\"" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.323099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" event={"ID":"869f1f5c-3365-4b92-8459-76f5a3a9611f","Type":"ContainerDied","Data":"33d683832c70e6846d1828ccb1ccb48cffbd7b101d8ecc2f6e1707278aaf3017"} Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.323148 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d683832c70e6846d1828ccb1ccb48cffbd7b101d8ecc2f6e1707278aaf3017" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.323183 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.882662 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.907567 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 15:30:05 crc kubenswrapper[4869]: I0202 15:30:05.479020 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" path="/var/lib/kubelet/pods/f4a6eca8-9d17-4791-add2-36c7119da5a5/volumes" Feb 02 15:30:15 crc kubenswrapper[4869]: I0202 15:30:15.304189 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:30:15 crc kubenswrapper[4869]: I0202 15:30:15.304782 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:30:31 crc kubenswrapper[4869]: I0202 15:30:31.164137 4869 scope.go:117] "RemoveContainer" containerID="28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.304712 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.305160 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.305287 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.306047 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.306115 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3" gracePeriod=600 Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.713833 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3" exitCode=0 Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.713917 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"} Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.714262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"} Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.714293 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.316316 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:06 crc kubenswrapper[4869]: E0202 15:31:06.317661 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.317685 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.318262 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.321043 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.331139 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.422479 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.422587 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.422637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525113 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525255 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.546187 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.670521 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.221743 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.915359 4869 generic.go:334] "Generic (PLEG): container finished" podID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" exitCode=0 Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.915401 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15"} Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.915683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerStarted","Data":"277152112f332f87ac8340ae43964e01f486c32a7b4f6924bbdad83677a450a2"} Feb 02 15:31:09 crc kubenswrapper[4869]: I0202 15:31:09.939787 4869 generic.go:334] "Generic (PLEG): container finished" podID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" exitCode=0 Feb 02 15:31:09 crc kubenswrapper[4869]: I0202 15:31:09.939863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e"} Feb 02 15:31:10 crc kubenswrapper[4869]: I0202 15:31:10.955974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerStarted","Data":"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33"} Feb 02 15:31:10 crc kubenswrapper[4869]: I0202 15:31:10.987889 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5q84c" podStartSLOduration=2.438247945 podStartE2EDuration="4.987861201s" podCreationTimestamp="2026-02-02 15:31:06 +0000 UTC" firstStartedPulling="2026-02-02 15:31:07.917097622 +0000 UTC m=+3469.561734392" lastFinishedPulling="2026-02-02 15:31:10.466710878 +0000 UTC m=+3472.111347648" observedRunningTime="2026-02-02 15:31:10.974654999 +0000 UTC m=+3472.619291809" watchObservedRunningTime="2026-02-02 15:31:10.987861201 +0000 UTC m=+3472.632497971" Feb 02 15:31:16 crc kubenswrapper[4869]: I0202 15:31:16.671411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:16 crc kubenswrapper[4869]: I0202 15:31:16.672765 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:16 crc kubenswrapper[4869]: I0202 15:31:16.733283 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:17 crc kubenswrapper[4869]: I0202 15:31:17.090548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:17 crc kubenswrapper[4869]: I0202 15:31:17.489703 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.050497 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5q84c" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" containerID="cri-o://d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" gracePeriod=2 Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.650328 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.834158 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"41426242-9734-4a7d-a77f-3d0b2ef6b467\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.834561 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"41426242-9734-4a7d-a77f-3d0b2ef6b467\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.834658 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"41426242-9734-4a7d-a77f-3d0b2ef6b467\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.835271 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities" (OuterVolumeSpecName: "utilities") pod "41426242-9734-4a7d-a77f-3d0b2ef6b467" (UID: "41426242-9734-4a7d-a77f-3d0b2ef6b467"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.843274 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc" (OuterVolumeSpecName: "kube-api-access-65cvc") pod "41426242-9734-4a7d-a77f-3d0b2ef6b467" (UID: "41426242-9734-4a7d-a77f-3d0b2ef6b467"). InnerVolumeSpecName "kube-api-access-65cvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.880659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41426242-9734-4a7d-a77f-3d0b2ef6b467" (UID: "41426242-9734-4a7d-a77f-3d0b2ef6b467"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.936959 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.937005 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.937020 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061824 4869 generic.go:334] "Generic (PLEG): container finished" podID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" exitCode=0 Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33"} Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061884 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"277152112f332f87ac8340ae43964e01f486c32a7b4f6924bbdad83677a450a2"} Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061938 4869 scope.go:117] "RemoveContainer" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.086405 4869 scope.go:117] "RemoveContainer" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.102790 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.111225 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.127273 4869 scope.go:117] "RemoveContainer" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.176838 4869 scope.go:117] "RemoveContainer" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" Feb 02 15:31:20 crc kubenswrapper[4869]: E0202 15:31:20.177719 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33\": container with ID starting with d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33 not found: ID does not exist" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.177763 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33"} err="failed to get container status \"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33\": rpc error: code = NotFound desc = could not find container \"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33\": container with ID starting with d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33 not found: ID does not exist" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.177785 4869 scope.go:117] "RemoveContainer" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" Feb 02 15:31:20 crc kubenswrapper[4869]: E0202 15:31:20.178044 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e\": container with ID starting with 266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e not found: ID does not exist" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.178067 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e"} err="failed to get container status \"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e\": rpc error: code = NotFound desc = could not find container \"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e\": container with ID starting with 266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e not found: ID does not exist" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.178083 4869 scope.go:117] "RemoveContainer" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" Feb 02 15:31:20 crc kubenswrapper[4869]: E0202 15:31:20.178271 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15\": container with ID starting with 843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15 not found: ID does not exist" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.178290 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15"} err="failed to get container status \"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15\": rpc error: code = NotFound desc = could not find container \"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15\": container with ID starting with 843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15 not found: ID does not exist" Feb 02 15:31:21 crc kubenswrapper[4869]: I0202 15:31:21.474754 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" path="/var/lib/kubelet/pods/41426242-9734-4a7d-a77f-3d0b2ef6b467/volumes" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.286612 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:28 crc kubenswrapper[4869]: E0202 15:31:28.287449 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-content" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287461 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-content" Feb 02 15:31:28 crc kubenswrapper[4869]: E0202 15:31:28.287473 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-utilities" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287483 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-utilities" Feb 02 15:31:28 crc kubenswrapper[4869]: E0202 15:31:28.287543 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287549 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287714 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.289024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.299916 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.426932 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.427375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.427582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") 
pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529991 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.530072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.551111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.624955 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:29 crc kubenswrapper[4869]: I0202 15:31:29.216699 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:29 crc kubenswrapper[4869]: W0202 15:31:29.232365 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ea1ffb_7462_485c_855c_ae3a5742ea5c.slice/crio-884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86 WatchSource:0}: Error finding container 884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86: Status 404 returned error can't find the container with id 884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86 Feb 02 15:31:30 crc kubenswrapper[4869]: I0202 15:31:30.150127 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82" exitCode=0 Feb 02 15:31:30 crc kubenswrapper[4869]: I0202 15:31:30.151081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"} Feb 02 15:31:30 crc kubenswrapper[4869]: I0202 15:31:30.151139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerStarted","Data":"884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86"} Feb 02 15:31:32 crc kubenswrapper[4869]: I0202 15:31:32.170684 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1" exitCode=0 Feb 02 15:31:32 crc kubenswrapper[4869]: I0202 15:31:32.170819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"} Feb 02 15:31:33 crc kubenswrapper[4869]: I0202 15:31:33.182555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerStarted","Data":"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"} Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.625717 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.626089 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.673934 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.721457 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wpwnv" podStartSLOduration=8.293728815 podStartE2EDuration="10.721441027s" podCreationTimestamp="2026-02-02 15:31:28 +0000 UTC" firstStartedPulling="2026-02-02 15:31:30.153585219 +0000 UTC m=+3491.798221989" 
lastFinishedPulling="2026-02-02 15:31:32.581297431 +0000 UTC m=+3494.225934201" observedRunningTime="2026-02-02 15:31:33.201947411 +0000 UTC m=+3494.846584191" watchObservedRunningTime="2026-02-02 15:31:38.721441027 +0000 UTC m=+3500.366077797" Feb 02 15:31:39 crc kubenswrapper[4869]: I0202 15:31:39.291100 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:40 crc kubenswrapper[4869]: I0202 15:31:40.686048 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:41 crc kubenswrapper[4869]: I0202 15:31:41.261891 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wpwnv" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server" containerID="cri-o://add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" gracePeriod=2 Feb 02 15:31:41 crc kubenswrapper[4869]: I0202 15:31:41.895706 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.028646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.028777 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.029700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities" (OuterVolumeSpecName: "utilities") pod "d6ea1ffb-7462-485c-855c-ae3a5742ea5c" (UID: "d6ea1ffb-7462-485c-855c-ae3a5742ea5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.029757 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") pod \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.030542 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.036127 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk" (OuterVolumeSpecName: "kube-api-access-mzppk") pod "d6ea1ffb-7462-485c-855c-ae3a5742ea5c" (UID: "d6ea1ffb-7462-485c-855c-ae3a5742ea5c"). InnerVolumeSpecName "kube-api-access-mzppk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.084352 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6ea1ffb-7462-485c-855c-ae3a5742ea5c" (UID: "d6ea1ffb-7462-485c-855c-ae3a5742ea5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.133318 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.133376 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272384 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" exitCode=0 Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"} Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272729 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86"} Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272753 4869 scope.go:117] "RemoveContainer" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272498 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.299966 4869 scope.go:117] "RemoveContainer" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.317170 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.329671 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.332749 4869 scope.go:117] "RemoveContainer" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.385092 4869 scope.go:117] "RemoveContainer" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" Feb 02 15:31:42 crc kubenswrapper[4869]: E0202 15:31:42.385771 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9\": container with ID starting with add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9 not found: ID does not exist" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.385801 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"} err="failed to get container status \"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9\": rpc error: code = NotFound desc = could not find container \"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9\": container with ID starting with add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9 not found: ID does not exist" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.385835 4869 scope.go:117] "RemoveContainer" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1" Feb 02 15:31:42 crc kubenswrapper[4869]: E0202 15:31:42.386231 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1\": container with ID starting with ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1 not found: ID does not exist" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.386251 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"} err="failed to get container status \"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1\": rpc error: code = NotFound desc = could not find container \"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1\": container with ID starting with ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1 not found: ID does not exist" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.386264 4869 scope.go:117] "RemoveContainer" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82" Feb 02 15:31:42 crc kubenswrapper[4869]: E0202 15:31:42.386761 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82\": container with ID starting with fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82 not found: ID does not exist" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82" Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.386839 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"} err="failed to get container status \"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82\": rpc error: code = NotFound desc = could not find container \"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82\": container with ID starting with fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82 not found: ID does not exist" Feb 02 15:31:43 crc kubenswrapper[4869]: I0202 15:31:43.477482 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" path="/var/lib/kubelet/pods/d6ea1ffb-7462-485c-855c-ae3a5742ea5c/volumes" Feb 02 15:32:45 crc kubenswrapper[4869]: I0202 15:32:45.305070 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:32:45 crc kubenswrapper[4869]: I0202 15:32:45.306198 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:33:15 crc kubenswrapper[4869]: I0202 15:33:15.304484 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:33:15 crc kubenswrapper[4869]: I0202 15:33:15.305098 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.304858 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.306176 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.306229 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.307063 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.307134 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" gracePeriod=600 Feb 02 15:33:45 crc kubenswrapper[4869]: E0202 15:33:45.426818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.422304 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" exitCode=0 Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.422403 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"} Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.423485 4869 scope.go:117] "RemoveContainer" containerID="63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3" Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.424326 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:33:46 crc kubenswrapper[4869]: E0202 15:33:46.425072 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:33:58 crc kubenswrapper[4869]: I0202 15:33:58.462425 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:33:58 crc kubenswrapper[4869]: E0202 15:33:58.463363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:34:13 crc 
kubenswrapper[4869]: I0202 15:34:13.051032 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.062478 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.078115 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.088024 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.464104 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:34:13 crc kubenswrapper[4869]: E0202 15:34:13.464485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.477185 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" path="/var/lib/kubelet/pods/5b666475-dc9a-41e9-b087-b2042c2dd80f/volumes" Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.483600 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" path="/var/lib/kubelet/pods/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607/volumes" Feb 02 15:34:28 crc kubenswrapper[4869]: I0202 15:34:28.462478 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:34:28 crc kubenswrapper[4869]: E0202 15:34:28.463229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:34:31 crc kubenswrapper[4869]: I0202 15:34:31.350118 4869 scope.go:117] "RemoveContainer" containerID="f6a65d674c18b4d91e1a4a5378741c663bb46842c68ee5b840ab49a144aef022" Feb 02 15:34:31 crc kubenswrapper[4869]: I0202 15:34:31.452844 4869 scope.go:117] "RemoveContainer" containerID="e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156" Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.463825 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.464563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:34:40 crc 
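From here on, every sync of machine-config-daemon-dql2j is refused with the same CrashLoopBackOff error. "back-off 5m0s" is the restart delay currently in force for this container; kubelet doubles the delay on each failed restart up to a cap, and 5m0s is that cap. A toy version of the ladder (the 10s initial delay and the 2x factor are standard kubelet defaults assumed here; only the 5m0s cap is visible in the log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap seen in the log
	delay := 10 * time.Second        // assumed initial delay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("failed restart %d -> next attempt in %v\n", restart, delay)
		delay *= 2 // exponential growth per failure
		if delay > maxDelay {
			delay = maxDelay // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, ...
		}
	}
}
```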
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887135 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.887891 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-content"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887915 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-content"
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.887965 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-utilities"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887972 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-utilities"
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.887988 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.888168 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.889456 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.918229 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.962140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.962238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.962265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.063861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.063927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.064126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.064368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.064474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.086874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.222757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.799601 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:41 crc kubenswrapper[4869]: W0202 15:34:41.811461 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb04306a_2210_4490_b163_3d8914b6478a.slice/crio-356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5 WatchSource:0}: Error finding container 356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5: Status 404 returned error can't find the container with id 356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.953870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerStarted","Data":"356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5"}
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.042515 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-jf2x2"]
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.055299 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-jf2x2"]
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.969166 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb04306a-2210-4490-b163-3d8914b6478a" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" exitCode=0
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.969316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9"}
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.971819 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 15:34:43 crc kubenswrapper[4869]: I0202 15:34:43.473969 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" path="/var/lib/kubelet/pods/d8b453d3-88d6-4fd5-bedc-62e0d4270f20/volumes"
Feb 02 15:34:44 crc kubenswrapper[4869]: I0202 15:34:44.992444 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb04306a-2210-4490-b163-3d8914b6478a" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" exitCode=0
Feb 02 15:34:44 crc kubenswrapper[4869]: I0202 15:34:44.992519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b"}
Feb 02 15:34:46 crc kubenswrapper[4869]: I0202 15:34:46.007612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerStarted","Data":"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"}
Feb 02 15:34:46 crc kubenswrapper[4869]: I0202 15:34:46.040948 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vj5c5" podStartSLOduration=3.552986741 podStartE2EDuration="6.040904987s" podCreationTimestamp="2026-02-02 15:34:40 +0000 UTC" firstStartedPulling="2026-02-02 15:34:42.971618464 +0000 UTC m=+3684.616255234" lastFinishedPulling="2026-02-02 15:34:45.45953671 +0000 UTC m=+3687.104173480" observedRunningTime="2026-02-02 15:34:46.029383856 +0000 UTC m=+3687.674020646" watchObservedRunningTime="2026-02-02 15:34:46.040904987 +0000 UTC m=+3687.685541757"
podStartE2EDuration="6.040904987s" podCreationTimestamp="2026-02-02 15:34:40 +0000 UTC" firstStartedPulling="2026-02-02 15:34:42.971618464 +0000 UTC m=+3684.616255234" lastFinishedPulling="2026-02-02 15:34:45.45953671 +0000 UTC m=+3687.104173480" observedRunningTime="2026-02-02 15:34:46.029383856 +0000 UTC m=+3687.674020646" watchObservedRunningTime="2026-02-02 15:34:46.040904987 +0000 UTC m=+3687.685541757" Feb 02 15:34:51 crc kubenswrapper[4869]: I0202 15:34:51.223417 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:51 crc kubenswrapper[4869]: I0202 15:34:51.224824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:51 crc kubenswrapper[4869]: I0202 15:34:51.277271 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:52 crc kubenswrapper[4869]: I0202 15:34:52.101544 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:52 crc kubenswrapper[4869]: I0202 15:34:52.148720 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"] Feb 02 15:34:52 crc kubenswrapper[4869]: I0202 15:34:52.462607 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:34:52 crc kubenswrapper[4869]: E0202 15:34:52.462959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.076833 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vj5c5" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server" containerID="cri-o://b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" gracePeriod=2 Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.796572 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.857228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"bb04306a-2210-4490-b163-3d8914b6478a\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.857363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"bb04306a-2210-4490-b163-3d8914b6478a\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.857655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"bb04306a-2210-4490-b163-3d8914b6478a\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.858458 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities" (OuterVolumeSpecName: "utilities") pod "bb04306a-2210-4490-b163-3d8914b6478a" (UID: "bb04306a-2210-4490-b163-3d8914b6478a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.869380 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p" (OuterVolumeSpecName: "kube-api-access-zmc4p") pod "bb04306a-2210-4490-b163-3d8914b6478a" (UID: "bb04306a-2210-4490-b163-3d8914b6478a"). InnerVolumeSpecName "kube-api-access-zmc4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.902926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb04306a-2210-4490-b163-3d8914b6478a" (UID: "bb04306a-2210-4490-b163-3d8914b6478a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.961150 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.961223 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") on node \"crc\" DevicePath \"\"" Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.961233 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087083 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb04306a-2210-4490-b163-3d8914b6478a" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" exitCode=0 Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"} Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5"} Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087178 4869 scope.go:117] "RemoveContainer" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087208 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.115948 4869 scope.go:117] "RemoveContainer" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.122555 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"] Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.135615 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"] Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.172992 4869 scope.go:117] "RemoveContainer" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.196054 4869 scope.go:117] "RemoveContainer" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" Feb 02 15:34:55 crc kubenswrapper[4869]: E0202 15:34:55.196685 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0\": container with ID starting with b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0 not found: ID does not exist" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.196747 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"} err="failed to get container status \"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0\": rpc error: code = NotFound desc = could not find container \"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0\": container with ID starting with b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0 not found: ID does not exist" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.196785 4869 scope.go:117] "RemoveContainer" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" Feb 02 15:34:55 crc kubenswrapper[4869]: E0202 15:34:55.197238 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b\": container with ID starting with 1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b not found: ID does not exist" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.197439 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b"} err="failed to get container status \"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b\": rpc error: code = NotFound desc = could not find container \"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b\": container with ID starting with 1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b not found: ID does not exist" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.197559 4869 scope.go:117] "RemoveContainer" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" Feb 02 15:34:55 crc kubenswrapper[4869]: E0202 15:34:55.198076 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9\": container with ID starting with ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9 not found: ID does not exist" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.198113 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9"} err="failed to get container status \"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9\": rpc error: code = NotFound desc = could not find container \"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9\": container with ID starting with ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9 not found: ID does not exist" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.475213 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb04306a-2210-4490-b163-3d8914b6478a" path="/var/lib/kubelet/pods/bb04306a-2210-4490-b163-3d8914b6478a/volumes" Feb 02 15:35:03 crc kubenswrapper[4869]: I0202 15:35:03.462705 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:03 crc kubenswrapper[4869]: E0202 15:35:03.463384 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:15 crc kubenswrapper[4869]: I0202 15:35:15.464028 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:15 crc kubenswrapper[4869]: E0202 15:35:15.464671 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:28 crc kubenswrapper[4869]: I0202 15:35:28.463030 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:28 crc kubenswrapper[4869]: E0202 15:35:28.463848 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:31 crc kubenswrapper[4869]: I0202 15:35:31.572286 4869 scope.go:117] "RemoveContainer" containerID="5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72" Feb 02 15:35:43 crc kubenswrapper[4869]: I0202 15:35:43.462774 4869 scope.go:117] "RemoveContainer" 
containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:43 crc kubenswrapper[4869]: E0202 15:35:43.463811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:56 crc kubenswrapper[4869]: I0202 15:35:56.462362 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:56 crc kubenswrapper[4869]: E0202 15:35:56.463323 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:08 crc kubenswrapper[4869]: I0202 15:36:08.462700 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:08 crc kubenswrapper[4869]: E0202 15:36:08.463989 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:21 crc kubenswrapper[4869]: I0202 15:36:21.462701 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:21 crc kubenswrapper[4869]: E0202 15:36:21.463538 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:36 crc kubenswrapper[4869]: I0202 15:36:36.462931 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:36 crc kubenswrapper[4869]: E0202 15:36:36.463756 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:50 crc kubenswrapper[4869]: I0202 15:36:50.462759 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:50 crc kubenswrapper[4869]: E0202 15:36:50.463783 4869 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:03 crc kubenswrapper[4869]: I0202 15:37:03.463388 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:03 crc kubenswrapper[4869]: E0202 15:37:03.464333 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:14 crc kubenswrapper[4869]: I0202 15:37:14.462735 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:14 crc kubenswrapper[4869]: E0202 15:37:14.463551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:25 crc kubenswrapper[4869]: I0202 15:37:25.463858 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:25 crc kubenswrapper[4869]: E0202 15:37:25.464781 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:38 crc kubenswrapper[4869]: I0202 15:37:38.463779 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:38 crc kubenswrapper[4869]: E0202 15:37:38.469969 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:53 crc kubenswrapper[4869]: I0202 15:37:53.462725 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:53 crc kubenswrapper[4869]: E0202 15:37:53.463517 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:08 crc kubenswrapper[4869]: I0202 15:38:08.462665 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:08 crc kubenswrapper[4869]: E0202 15:38:08.463479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:23 crc kubenswrapper[4869]: I0202 15:38:23.462902 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:23 crc kubenswrapper[4869]: E0202 15:38:23.464013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:34 crc kubenswrapper[4869]: I0202 15:38:34.463313 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:34 crc kubenswrapper[4869]: E0202 15:38:34.464265 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:45 crc kubenswrapper[4869]: I0202 15:38:45.462585 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:46 crc kubenswrapper[4869]: I0202 15:38:46.077550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2"} Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.414996 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:38:54 crc kubenswrapper[4869]: E0202 15:38:54.416141 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-content" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416162 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-content" Feb 02 15:38:54 crc kubenswrapper[4869]: E0202 15:38:54.416175 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server" Feb 02 15:38:54 crc 
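Worked out, the back-off window in this stretch is exact: the machine-config-daemon container exited at 15:33:46, every retry from then through 15:38:34 was refused with "back-off 5m0s", and the first attempt after the cap elapsed, the RemoveContainer at 15:38:45, goes through, with the replacement container (7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2) started at 15:38:46. That is 15:38:46 - 15:33:46 = 5m0s, matching the advertised back-off to the second.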
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416183 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server"
Feb 02 15:38:54 crc kubenswrapper[4869]: E0202 15:38:54.416198 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-utilities"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416205 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-utilities"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416467 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.418154 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.423443 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"]
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.473713 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.473958 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.474126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.575859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.575930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.576061 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.577037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.577094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.603979 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.739815 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6"
Feb 02 15:38:55 crc kubenswrapper[4869]: I0202 15:38:55.242866 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"]
Feb 02 15:38:56 crc kubenswrapper[4869]: I0202 15:38:56.158342 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" exitCode=0
Feb 02 15:38:56 crc kubenswrapper[4869]: I0202 15:38:56.158823 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66"}
Feb 02 15:38:56 crc kubenswrapper[4869]: I0202 15:38:56.158851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"e24dbfec315c720223529ae8c9eb96fbd2221b4a094f19943a5217cff897c3dc"}
Feb 02 15:38:58 crc kubenswrapper[4869]: I0202 15:38:58.176170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38"}
Feb 02 15:39:03 crc kubenswrapper[4869]: I0202 15:39:03.219649 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" exitCode=0
Feb 02 15:39:03 crc kubenswrapper[4869]: I0202 15:39:03.219684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38"}
Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.230833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"}
event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"} Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.259451 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zkwj6" podStartSLOduration=2.770356494 podStartE2EDuration="10.259432394s" podCreationTimestamp="2026-02-02 15:38:54 +0000 UTC" firstStartedPulling="2026-02-02 15:38:56.160785397 +0000 UTC m=+3937.805422167" lastFinishedPulling="2026-02-02 15:39:03.649861297 +0000 UTC m=+3945.294498067" observedRunningTime="2026-02-02 15:39:04.25150286 +0000 UTC m=+3945.896139670" watchObservedRunningTime="2026-02-02 15:39:04.259432394 +0000 UTC m=+3945.904069164" Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.741386 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.741747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:05 crc kubenswrapper[4869]: I0202 15:39:05.790495 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkwj6" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" probeResult="failure" output=< Feb 02 15:39:05 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 15:39:05 crc kubenswrapper[4869]: > Feb 02 15:39:14 crc kubenswrapper[4869]: I0202 15:39:14.818918 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:14 crc kubenswrapper[4869]: I0202 15:39:14.870289 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:15 crc kubenswrapper[4869]: I0202 15:39:15.055706 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:39:16 crc kubenswrapper[4869]: I0202 15:39:16.338442 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zkwj6" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" containerID="cri-o://51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" gracePeriod=2 Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.079729 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.153436 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"b5b211ab-34d9-4892-9db6-55cd96a21407\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.153579 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"b5b211ab-34d9-4892-9db6-55cd96a21407\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.153703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"b5b211ab-34d9-4892-9db6-55cd96a21407\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.154540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities" (OuterVolumeSpecName: "utilities") pod "b5b211ab-34d9-4892-9db6-55cd96a21407" (UID: "b5b211ab-34d9-4892-9db6-55cd96a21407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.166064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr" (OuterVolumeSpecName: "kube-api-access-nc2gr") pod "b5b211ab-34d9-4892-9db6-55cd96a21407" (UID: "b5b211ab-34d9-4892-9db6-55cd96a21407"). InnerVolumeSpecName "kube-api-access-nc2gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.255933 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") on node \"crc\" DevicePath \"\"" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.255973 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.291367 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5b211ab-34d9-4892-9db6-55cd96a21407" (UID: "b5b211ab-34d9-4892-9db6-55cd96a21407"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348753 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" exitCode=0 Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"} Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348827 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348847 4869 scope.go:117] "RemoveContainer" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"e24dbfec315c720223529ae8c9eb96fbd2221b4a094f19943a5217cff897c3dc"} Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.357869 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.386703 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.392809 4869 scope.go:117] "RemoveContainer" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.396843 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.424440 4869 scope.go:117] "RemoveContainer" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.484291 4869 scope.go:117] "RemoveContainer" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" Feb 02 15:39:17 crc kubenswrapper[4869]: E0202 15:39:17.485225 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d\": container with ID starting with 51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d not found: ID does not exist" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.485274 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"} err="failed to get container status \"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d\": rpc error: code = NotFound desc = could not find container \"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d\": container with ID starting with 51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d not found: ID does not exist" Feb 02 15:39:17 crc 
kubenswrapper[4869]: I0202 15:39:17.485329 4869 scope.go:117] "RemoveContainer" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" Feb 02 15:39:17 crc kubenswrapper[4869]: E0202 15:39:17.485694 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38\": container with ID starting with ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38 not found: ID does not exist" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.485730 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38"} err="failed to get container status \"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38\": rpc error: code = NotFound desc = could not find container \"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38\": container with ID starting with ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38 not found: ID does not exist" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.485761 4869 scope.go:117] "RemoveContainer" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" Feb 02 15:39:17 crc kubenswrapper[4869]: E0202 15:39:17.486140 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66\": container with ID starting with 3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66 not found: ID does not exist" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.486166 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66"} err="failed to get container status \"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66\": rpc error: code = NotFound desc = could not find container \"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66\": container with ID starting with 3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66 not found: ID does not exist" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.488053 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" path="/var/lib/kubelet/pods/b5b211ab-34d9-4892-9db6-55cd96a21407/volumes" Feb 02 15:40:45 crc kubenswrapper[4869]: I0202 15:40:45.304696 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:40:45 crc kubenswrapper[4869]: I0202 15:40:45.305183 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:41:15 crc kubenswrapper[4869]: I0202 15:41:15.304522 4869 patch_prober.go:28] interesting 
pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:41:15 crc kubenswrapper[4869]: I0202 15:41:15.305119 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.304787 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.305370 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.305414 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.306140 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.306187 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2" gracePeriod=600 Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230035 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2" exitCode=0 Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2"} Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"} Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230573 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:43:45 
crc kubenswrapper[4869]: I0202 15:43:45.304653 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:43:45 crc kubenswrapper[4869]: I0202 15:43:45.306107 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:44:15 crc kubenswrapper[4869]: I0202 15:44:15.304059 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:44:15 crc kubenswrapper[4869]: I0202 15:44:15.304609 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.238079 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.239192 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-content" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239211 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-content" Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.239231 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239238 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.239247 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-utilities" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239259 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-utilities" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239550 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.240868 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.267493 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.290944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.291002 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.291050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.304401 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.304454 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.304498 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.305235 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.305301 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" gracePeriod=600 Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.392444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod 
\"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.392493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.392516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.393697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.394104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.676954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.677530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.777140 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" exitCode=0 Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.777204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"} Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.777476 4869 scope.go:117] "RemoveContainer" containerID="7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.778223 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:44:45 crc 
kubenswrapper[4869]: E0202 15:44:45.778522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.862969 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.406802 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.792053 4869 generic.go:334] "Generic (PLEG): container finished" podID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerID="971cc8e1afaafc554bca06e5fb085210161555600145c8cb154b8f6945d40b46" exitCode=0 Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.792148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"971cc8e1afaafc554bca06e5fb085210161555600145c8cb154b8f6945d40b46"} Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.792466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerStarted","Data":"104f15ada5cdd6ac325cc93af5fc5d927ee4037a73902017cfebc94d03582b0c"} Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.794850 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.439247 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.442003 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.453946 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.533286 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.533923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.534152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.636836 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.637232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.639414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.637959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.640923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.641769 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.642430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.649326 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.668381 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.746903 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.747021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.747311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.765334 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.808582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerStarted","Data":"7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814"} Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.848752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.848870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.848898 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.849292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.849561 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.881306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.973520 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.375183 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:44:48 crc kubenswrapper[4869]: W0202 15:44:48.388968 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb731e8d9_da5b_464a_9ef0_7cf6311056d4.slice/crio-3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf WatchSource:0}: Error finding container 3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf: Status 404 returned error can't find the container with id 3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.589875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:44:48 crc kubenswrapper[4869]: W0202 15:44:48.594039 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc34a43bb_26f9_41bb_8d40_7cd30e71525d.slice/crio-589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912 WatchSource:0}: Error finding container 589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912: Status 404 returned error can't find the container with id 589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912 Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.833692 4869 generic.go:334] "Generic (PLEG): container finished" podID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerID="7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814" exitCode=0 Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.834065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814"} Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.839071 4869 generic.go:334] "Generic (PLEG): container finished" podID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerID="f84224d4a0640bc9c4cadf8e36472e8fe09028de333f0ae6e883f54ed753862a" exitCode=0 Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.839191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"f84224d4a0640bc9c4cadf8e36472e8fe09028de333f0ae6e883f54ed753862a"} Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.839250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerStarted","Data":"3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf"} Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.874538 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerStarted","Data":"589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912"} Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.884898 4869 generic.go:334] "Generic (PLEG): container finished" podID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerID="310c6f14696587aa249ead65052fe71a80bf5c91456e89be6fbb2af185a52ea5" 
exitCode=0 Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.884950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"310c6f14696587aa249ead65052fe71a80bf5c91456e89be6fbb2af185a52ea5"} Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.890244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerStarted","Data":"c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a"} Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.930837 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdwgt" podStartSLOduration=2.3993174809999998 podStartE2EDuration="4.930814027s" podCreationTimestamp="2026-02-02 15:44:45 +0000 UTC" firstStartedPulling="2026-02-02 15:44:46.794641278 +0000 UTC m=+4288.439278048" lastFinishedPulling="2026-02-02 15:44:49.326137824 +0000 UTC m=+4290.970774594" observedRunningTime="2026-02-02 15:44:49.927010556 +0000 UTC m=+4291.571647336" watchObservedRunningTime="2026-02-02 15:44:49.930814027 +0000 UTC m=+4291.575450807" Feb 02 15:44:50 crc kubenswrapper[4869]: I0202 15:44:50.900605 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerStarted","Data":"d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd"} Feb 02 15:44:50 crc kubenswrapper[4869]: I0202 15:44:50.903853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerStarted","Data":"493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf"} Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.922225 4869 generic.go:334] "Generic (PLEG): container finished" podID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerID="493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf" exitCode=0 Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.922298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf"} Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.927227 4869 generic.go:334] "Generic (PLEG): container finished" podID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerID="d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd" exitCode=0 Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.927281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd"} Feb 02 15:44:53 crc kubenswrapper[4869]: I0202 15:44:53.944379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerStarted","Data":"a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a"} Feb 02 15:44:53 crc kubenswrapper[4869]: I0202 15:44:53.950725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerStarted","Data":"d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383"} Feb 02 15:44:53 crc kubenswrapper[4869]: I0202 15:44:53.971615 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8f782" podStartSLOduration=3.272367858 podStartE2EDuration="6.971593756s" podCreationTimestamp="2026-02-02 15:44:47 +0000 UTC" firstStartedPulling="2026-02-02 15:44:49.886489346 +0000 UTC m=+4291.531126116" lastFinishedPulling="2026-02-02 15:44:53.585715244 +0000 UTC m=+4295.230352014" observedRunningTime="2026-02-02 15:44:53.969421014 +0000 UTC m=+4295.614057804" watchObservedRunningTime="2026-02-02 15:44:53.971593756 +0000 UTC m=+4295.616230516" Feb 02 15:44:54 crc kubenswrapper[4869]: I0202 15:44:54.007483 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfzjr" podStartSLOduration=2.327748343 podStartE2EDuration="7.007453814s" podCreationTimestamp="2026-02-02 15:44:47 +0000 UTC" firstStartedPulling="2026-02-02 15:44:48.854928867 +0000 UTC m=+4290.499565637" lastFinishedPulling="2026-02-02 15:44:53.534634338 +0000 UTC m=+4295.179271108" observedRunningTime="2026-02-02 15:44:53.98992863 +0000 UTC m=+4295.634565400" watchObservedRunningTime="2026-02-02 15:44:54.007453814 +0000 UTC m=+4295.652090584" Feb 02 15:44:55 crc kubenswrapper[4869]: I0202 15:44:55.863333 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:55 crc kubenswrapper[4869]: I0202 15:44:55.863383 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:55 crc kubenswrapper[4869]: I0202 15:44:55.918787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:56 crc kubenswrapper[4869]: I0202 15:44:56.034313 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.766399 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.767282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.820514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.974929 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.974979 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.032094 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.058075 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.080100 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.463008 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:44:58 crc kubenswrapper[4869]: E0202 15:44:58.463426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:44:59 crc kubenswrapper[4869]: I0202 15:44:59.634617 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:59 crc kubenswrapper[4869]: I0202 15:44:59.635220 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdwgt" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" containerID="cri-o://c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a" gracePeriod=2 Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.026791 4869 generic.go:334] "Generic (PLEG): container finished" podID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerID="c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a" exitCode=0 Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.028083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a"} Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.197345 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v"] Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.199969 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.202141 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222065 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222397 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v"] Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.229205 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.251661 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.327346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.327441 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.327556 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.328853 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.336545 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.354881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.354926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.430720 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.436820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"82ffd26c-f9c6-464b-bd85-24daabb4a361\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.442350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd" (OuterVolumeSpecName: "kube-api-access-ghqvd") pod "82ffd26c-f9c6-464b-bd85-24daabb4a361" (UID: "82ffd26c-f9c6-464b-bd85-24daabb4a361"). InnerVolumeSpecName "kube-api-access-ghqvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.540440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod \"82ffd26c-f9c6-464b-bd85-24daabb4a361\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.540552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"82ffd26c-f9c6-464b-bd85-24daabb4a361\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.541235 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.542341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities" (OuterVolumeSpecName: "utilities") pod "82ffd26c-f9c6-464b-bd85-24daabb4a361" (UID: "82ffd26c-f9c6-464b-bd85-24daabb4a361"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.570285 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82ffd26c-f9c6-464b-bd85-24daabb4a361" (UID: "82ffd26c-f9c6-464b-bd85-24daabb4a361"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.644013 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.644059 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.896382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v"] Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.038141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerStarted","Data":"fe6079af63eb74c307e9b9ef6c867c7fbe4f9baf9bae3717acf0882ffd36e3bd"} Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040343 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"104f15ada5cdd6ac325cc93af5fc5d927ee4037a73902017cfebc94d03582b0c"} Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040387 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040422 4869 scope.go:117] "RemoveContainer" containerID="c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040541 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfzjr" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" containerID="cri-o://d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383" gracePeriod=2 Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.073204 4869 scope.go:117] "RemoveContainer" containerID="7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.092725 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.102200 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.474328 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" path="/var/lib/kubelet/pods/82ffd26c-f9c6-464b-bd85-24daabb4a361/volumes" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.492264 4869 scope.go:117] "RemoveContainer" containerID="971cc8e1afaafc554bca06e5fb085210161555600145c8cb154b8f6945d40b46" Feb 02 15:45:02 crc kubenswrapper[4869]: I0202 15:45:02.025579 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:45:02 crc kubenswrapper[4869]: I0202 15:45:02.025815 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8f782" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" containerID="cri-o://a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a" gracePeriod=2 Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.068214 4869 generic.go:334] "Generic (PLEG): container finished" podID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerID="a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a" exitCode=0 Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.068424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a"} Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.070819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerStarted","Data":"3a4cc8364b5164f25f0a96a2c5e5007ac3dbe97a7db78fdaa9fad0c2ebcc3ea0"} Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.076392 4869 generic.go:334] "Generic (PLEG): container finished" podID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerID="d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383" exitCode=0 Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.076620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" 
event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383"} Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.093068 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" podStartSLOduration=3.093041155 podStartE2EDuration="3.093041155s" podCreationTimestamp="2026-02-02 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:45:03.087266195 +0000 UTC m=+4304.731902975" watchObservedRunningTime="2026-02-02 15:45:03.093041155 +0000 UTC m=+4304.737677925" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.384662 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.393084 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504417 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504693 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities" (OuterVolumeSpecName: "utilities") pod 
"c34a43bb-26f9-41bb-8d40-7cd30e71525d" (UID: "c34a43bb-26f9-41bb-8d40-7cd30e71525d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506189 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities" (OuterVolumeSpecName: "utilities") pod "b731e8d9-da5b-464a-9ef0-7cf6311056d4" (UID: "b731e8d9-da5b-464a-9ef0-7cf6311056d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506852 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506887 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.511834 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx" (OuterVolumeSpecName: "kube-api-access-lwblx") pod "b731e8d9-da5b-464a-9ef0-7cf6311056d4" (UID: "b731e8d9-da5b-464a-9ef0-7cf6311056d4"). InnerVolumeSpecName "kube-api-access-lwblx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.523479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67" (OuterVolumeSpecName: "kube-api-access-r6x67") pod "c34a43bb-26f9-41bb-8d40-7cd30e71525d" (UID: "c34a43bb-26f9-41bb-8d40-7cd30e71525d"). InnerVolumeSpecName "kube-api-access-r6x67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.563343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c34a43bb-26f9-41bb-8d40-7cd30e71525d" (UID: "c34a43bb-26f9-41bb-8d40-7cd30e71525d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.567325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b731e8d9-da5b-464a-9ef0-7cf6311056d4" (UID: "b731e8d9-da5b-464a-9ef0-7cf6311056d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.608997 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.609030 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.609039 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.609048 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.088511 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf"} Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.088567 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.088860 4869 scope.go:117] "RemoveContainer" containerID="d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.093789 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.094142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912"} Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.100634 4869 generic.go:334] "Generic (PLEG): container finished" podID="0000345e-eabc-4888-acdb-00c809746e96" containerID="3a4cc8364b5164f25f0a96a2c5e5007ac3dbe97a7db78fdaa9fad0c2ebcc3ea0" exitCode=0 Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.100691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerDied","Data":"3a4cc8364b5164f25f0a96a2c5e5007ac3dbe97a7db78fdaa9fad0c2ebcc3ea0"} Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.117411 4869 scope.go:117] "RemoveContainer" containerID="d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.157242 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.158731 4869 scope.go:117] "RemoveContainer" containerID="f84224d4a0640bc9c4cadf8e36472e8fe09028de333f0ae6e883f54ed753862a" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.166424 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.176598 4869 scope.go:117] "RemoveContainer" containerID="a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.178919 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.191139 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.241142 4869 scope.go:117] "RemoveContainer" containerID="493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.277335 4869 scope.go:117] "RemoveContainer" containerID="310c6f14696587aa249ead65052fe71a80bf5c91456e89be6fbb2af185a52ea5" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.522141 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" path="/var/lib/kubelet/pods/b731e8d9-da5b-464a-9ef0-7cf6311056d4/volumes" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.523905 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" path="/var/lib/kubelet/pods/c34a43bb-26f9-41bb-8d40-7cd30e71525d/volumes" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.682880 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.861896 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"0000345e-eabc-4888-acdb-00c809746e96\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.862102 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"0000345e-eabc-4888-acdb-00c809746e96\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.862257 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"0000345e-eabc-4888-acdb-00c809746e96\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.863478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume" (OuterVolumeSpecName: "config-volume") pod "0000345e-eabc-4888-acdb-00c809746e96" (UID: "0000345e-eabc-4888-acdb-00c809746e96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.867355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt" (OuterVolumeSpecName: "kube-api-access-gqspt") pod "0000345e-eabc-4888-acdb-00c809746e96" (UID: "0000345e-eabc-4888-acdb-00c809746e96"). InnerVolumeSpecName "kube-api-access-gqspt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.867828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0000345e-eabc-4888-acdb-00c809746e96" (UID: "0000345e-eabc-4888-acdb-00c809746e96"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.964996 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.965034 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.965046 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.123438 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerDied","Data":"fe6079af63eb74c307e9b9ef6c867c7fbe4f9baf9bae3717acf0882ffd36e3bd"} Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.123536 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6079af63eb74c307e9b9ef6c867c7fbe4f9baf9bae3717acf0882ffd36e3bd" Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.123554 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.165457 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.174010 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:45:07 crc kubenswrapper[4869]: I0202 15:45:07.478508 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" path="/var/lib/kubelet/pods/2f7b8e70-b003-44d3-92f8-f3537d98f42f/volumes" Feb 02 15:45:10 crc kubenswrapper[4869]: I0202 15:45:10.463126 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:10 crc kubenswrapper[4869]: E0202 15:45:10.464061 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:45:25 crc kubenswrapper[4869]: I0202 15:45:25.463001 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:25 crc kubenswrapper[4869]: E0202 15:45:25.464097 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:45:31 crc kubenswrapper[4869]: I0202 15:45:31.817344 4869 scope.go:117] "RemoveContainer" containerID="59bc9e2bf2a33d0613a4b3662bade576d4b886a4ed9586484e6fdba35d1e7e34" Feb 02 15:45:36 crc kubenswrapper[4869]: I0202 15:45:36.463348 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:36 crc kubenswrapper[4869]: E0202 15:45:36.464387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:45:48 crc kubenswrapper[4869]: I0202 15:45:48.462032 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:48 crc kubenswrapper[4869]: E0202 15:45:48.462831 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:01 crc kubenswrapper[4869]: I0202 15:46:01.462751 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:01 crc kubenswrapper[4869]: E0202 15:46:01.466437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:16 crc kubenswrapper[4869]: I0202 15:46:16.463147 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:16 crc kubenswrapper[4869]: E0202 15:46:16.463896 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:28 crc kubenswrapper[4869]: I0202 15:46:28.462819 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:28 crc kubenswrapper[4869]: E0202 15:46:28.463866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:41 crc kubenswrapper[4869]: I0202 15:46:41.462474 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:41 crc kubenswrapper[4869]: E0202 15:46:41.463272 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:53 crc kubenswrapper[4869]: I0202 15:46:53.462519 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:53 crc kubenswrapper[4869]: E0202 15:46:53.463364 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:05 crc kubenswrapper[4869]: I0202 15:47:05.462568 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:05 crc kubenswrapper[4869]: E0202 15:47:05.463346 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:18 crc kubenswrapper[4869]: I0202 15:47:18.463432 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:18 crc kubenswrapper[4869]: E0202 15:47:18.464373 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:29 crc kubenswrapper[4869]: I0202 15:47:29.470239 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:29 crc kubenswrapper[4869]: E0202 15:47:29.471122 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:42 crc kubenswrapper[4869]: I0202 15:47:42.463881 4869 
scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:42 crc kubenswrapper[4869]: E0202 15:47:42.480497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:54 crc kubenswrapper[4869]: I0202 15:47:54.463134 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:54 crc kubenswrapper[4869]: E0202 15:47:54.464112 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:09 crc kubenswrapper[4869]: I0202 15:48:09.468591 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:09 crc kubenswrapper[4869]: E0202 15:48:09.469382 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:24 crc kubenswrapper[4869]: I0202 15:48:24.463114 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:24 crc kubenswrapper[4869]: E0202 15:48:24.465034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:39 crc kubenswrapper[4869]: I0202 15:48:39.469008 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:39 crc kubenswrapper[4869]: E0202 15:48:39.470027 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:54 crc kubenswrapper[4869]: I0202 15:48:54.463366 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:54 crc kubenswrapper[4869]: E0202 15:48:54.464291 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:49:07 crc kubenswrapper[4869]: I0202 15:49:07.463139 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:49:07 crc kubenswrapper[4869]: E0202 15:49:07.464150 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:49:22 crc kubenswrapper[4869]: I0202 15:49:22.463046 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:49:22 crc kubenswrapper[4869]: E0202 15:49:22.465197 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:49:37 crc kubenswrapper[4869]: I0202 15:49:37.463194 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:49:37 crc kubenswrapper[4869]: E0202 15:49:37.464016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:49:50 crc kubenswrapper[4869]: I0202 15:49:50.463735 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:49:51 crc kubenswrapper[4869]: I0202 15:49:51.452002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"} Feb 02 15:52:15 crc kubenswrapper[4869]: I0202 15:52:15.303902 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:52:15 crc kubenswrapper[4869]: I0202 15:52:15.304547 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:52:45 crc kubenswrapper[4869]: I0202 15:52:45.304371 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:52:45 crc kubenswrapper[4869]: I0202 15:52:45.304955 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.304015 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.304592 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.304638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.305432 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.305478 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4" gracePeriod=600 Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.313697 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4" exitCode=0 Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.313756 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"} Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.314032 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"} Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.314055 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:55:15 crc kubenswrapper[4869]: I0202 15:55:15.303861 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:55:15 crc kubenswrapper[4869]: I0202 15:55:15.304427 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:55:45 crc kubenswrapper[4869]: I0202 15:55:45.304502 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:55:45 crc kubenswrapper[4869]: I0202 15:55:45.305165 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.414238 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415335 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415356 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415378 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415386 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415402 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415409 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415422 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415429 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415448 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415455 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415463 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0000345e-eabc-4888-acdb-00c809746e96" containerName="collect-profiles" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415470 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0000345e-eabc-4888-acdb-00c809746e96" containerName="collect-profiles" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415481 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415487 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415499 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415506 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415521 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415530 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415547 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415555 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415797 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415811 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0000345e-eabc-4888-acdb-00c809746e96" containerName="collect-profiles" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415822 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415838 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.417456 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.436507 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.574854 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.575014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.575285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.676793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.676869 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.676953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.677398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.677440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.872057 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.040681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.519149 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.901143 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" exitCode=0 Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.901192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608"} Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.901226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerStarted","Data":"1d1ed7f54b361397932f6778687359fc99d59b970ece5722346717305c71da45"} Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.904049 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:56:02 crc kubenswrapper[4869]: I0202 15:56:02.913776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerStarted","Data":"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2"} Feb 02 15:56:03 crc kubenswrapper[4869]: E0202 15:56:03.088101 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f1a097c_7ace_42fd_9cff_7361112e8226.slice/crio-conmon-5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2.scope\": RecentStats: unable to find data in memory cache]" Feb 02 15:56:03 crc kubenswrapper[4869]: I0202 15:56:03.927393 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" exitCode=0 Feb 02 15:56:03 crc kubenswrapper[4869]: I0202 15:56:03.927448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2"} Feb 02 15:56:04 crc kubenswrapper[4869]: I0202 15:56:04.938541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerStarted","Data":"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"} Feb 02 15:56:04 crc kubenswrapper[4869]: I0202 15:56:04.963604 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kzj25" podStartSLOduration=2.394134332 podStartE2EDuration="4.963586091s" podCreationTimestamp="2026-02-02 15:56:00 +0000 UTC" firstStartedPulling="2026-02-02 15:56:01.903622131 +0000 UTC m=+4963.548258911" lastFinishedPulling="2026-02-02 15:56:04.47307391 +0000 UTC m=+4966.117710670" observedRunningTime="2026-02-02 15:56:04.959181375 +0000 UTC m=+4966.603818145" watchObservedRunningTime="2026-02-02 15:56:04.963586091 +0000 UTC m=+4966.608222851" Feb 02 15:56:11 crc kubenswrapper[4869]: I0202 15:56:11.042247 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:11 crc kubenswrapper[4869]: I0202 15:56:11.042759 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:11 crc kubenswrapper[4869]: I0202 15:56:11.111625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:12 crc kubenswrapper[4869]: I0202 15:56:12.215688 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:12 crc kubenswrapper[4869]: I0202 15:56:12.270646 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.019025 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kzj25" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" containerID="cri-o://aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" gracePeriod=2 Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.449711 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.500370 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"2f1a097c-7ace-42fd-9cff-7361112e8226\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.500474 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"2f1a097c-7ace-42fd-9cff-7361112e8226\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.500668 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"2f1a097c-7ace-42fd-9cff-7361112e8226\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.502550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities" (OuterVolumeSpecName: "utilities") pod "2f1a097c-7ace-42fd-9cff-7361112e8226" (UID: "2f1a097c-7ace-42fd-9cff-7361112e8226"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.509566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n" (OuterVolumeSpecName: "kube-api-access-qc46n") pod "2f1a097c-7ace-42fd-9cff-7361112e8226" (UID: "2f1a097c-7ace-42fd-9cff-7361112e8226"). InnerVolumeSpecName "kube-api-access-qc46n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.554958 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f1a097c-7ace-42fd-9cff-7361112e8226" (UID: "2f1a097c-7ace-42fd-9cff-7361112e8226"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.603284 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.603337 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.603353 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030344 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" exitCode=0 Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"} Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030421 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"1d1ed7f54b361397932f6778687359fc99d59b970ece5722346717305c71da45"} Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030475 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030509 4869 scope.go:117] "RemoveContainer" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.031330 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031354 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.031365 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-content" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-content" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.031428 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-utilities" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031437 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-utilities" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031703 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.033365 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.074658 4869 scope.go:117] "RemoveContainer" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.101962 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.112963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.113128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.113173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.126091 4869 scope.go:117] "RemoveContainer" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.170839 4869 scope.go:117] "RemoveContainer" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.182370 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8\": container with ID starting with aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8 not found: ID does not exist" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.182424 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"} err="failed to get container status \"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8\": rpc error: code = NotFound desc = could not find container \"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8\": container with ID starting with aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8 not found: ID does not exist" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.182458 4869 scope.go:117] "RemoveContainer" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.183207 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2\": container with ID 
starting with 5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2 not found: ID does not exist" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.183261 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2"} err="failed to get container status \"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2\": rpc error: code = NotFound desc = could not find container \"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2\": container with ID starting with 5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2 not found: ID does not exist" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.183290 4869 scope.go:117] "RemoveContainer" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.183665 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608\": container with ID starting with 6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608 not found: ID does not exist" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.183688 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608"} err="failed to get container status \"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608\": rpc error: code = NotFound desc = could not find container \"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608\": container with ID starting with 6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608 not found: ID does not exist" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.186196 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.197009 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" 
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.216033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.238096 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.304791 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.304863 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.304945 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.305912 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.306054 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" gracePeriod=600 Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.409359 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.440283 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.478027 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" path="/var/lib/kubelet/pods/2f1a097c-7ace-42fd-9cff-7361112e8226/volumes" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.936812 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.041071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerStarted","Data":"3a7c65907adb73b71465ec45c8d0a735be7267b5d9f38d33359388e78eaded22"} Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.048181 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" exitCode=0 Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.048231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"} Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.048266 4869 scope.go:117] "RemoveContainer" containerID="53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4" Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.049454 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:56:16 crc kubenswrapper[4869]: E0202 15:56:16.049988 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:56:17 crc kubenswrapper[4869]: I0202 15:56:17.058229 4869 generic.go:334] "Generic (PLEG): container finished" podID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" exitCode=0 Feb 02 15:56:17 crc kubenswrapper[4869]: I0202 15:56:17.058267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db"} Feb 02 15:56:18 crc kubenswrapper[4869]: I0202 15:56:18.071592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" 
event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerStarted","Data":"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2"} Feb 02 15:56:19 crc kubenswrapper[4869]: I0202 15:56:19.081755 4869 generic.go:334] "Generic (PLEG): container finished" podID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" exitCode=0 Feb 02 15:56:19 crc kubenswrapper[4869]: I0202 15:56:19.081815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2"} Feb 02 15:56:20 crc kubenswrapper[4869]: I0202 15:56:20.091793 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerStarted","Data":"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf"} Feb 02 15:56:20 crc kubenswrapper[4869]: I0202 15:56:20.123050 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpdrs" podStartSLOduration=2.667870964 podStartE2EDuration="5.123017452s" podCreationTimestamp="2026-02-02 15:56:15 +0000 UTC" firstStartedPulling="2026-02-02 15:56:17.059876467 +0000 UTC m=+4978.704513237" lastFinishedPulling="2026-02-02 15:56:19.515022955 +0000 UTC m=+4981.159659725" observedRunningTime="2026-02-02 15:56:20.116850214 +0000 UTC m=+4981.761486984" watchObservedRunningTime="2026-02-02 15:56:20.123017452 +0000 UTC m=+4981.767654222" Feb 02 15:56:25 crc kubenswrapper[4869]: I0202 15:56:25.410241 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:25 crc kubenswrapper[4869]: I0202 15:56:25.410795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:25 crc kubenswrapper[4869]: I0202 15:56:25.484799 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:26 crc kubenswrapper[4869]: I0202 15:56:26.218989 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:26 crc kubenswrapper[4869]: I0202 15:56:26.268336 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:27 crc kubenswrapper[4869]: I0202 15:56:27.463860 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:56:27 crc kubenswrapper[4869]: E0202 15:56:27.464508 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.179433 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpdrs" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" 
containerName="registry-server" containerID="cri-o://7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" gracePeriod=2 Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.640029 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.792590 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"c818aa24-fa5f-4240-9b0b-66d16f60329e\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.792673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"c818aa24-fa5f-4240-9b0b-66d16f60329e\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.792843 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"c818aa24-fa5f-4240-9b0b-66d16f60329e\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.794190 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities" (OuterVolumeSpecName: "utilities") pod "c818aa24-fa5f-4240-9b0b-66d16f60329e" (UID: "c818aa24-fa5f-4240-9b0b-66d16f60329e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.806218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn" (OuterVolumeSpecName: "kube-api-access-29dvn") pod "c818aa24-fa5f-4240-9b0b-66d16f60329e" (UID: "c818aa24-fa5f-4240-9b0b-66d16f60329e"). InnerVolumeSpecName "kube-api-access-29dvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.857260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c818aa24-fa5f-4240-9b0b-66d16f60329e" (UID: "c818aa24-fa5f-4240-9b0b-66d16f60329e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.895450 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.895489 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.895501 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200383 4869 generic.go:334] "Generic (PLEG): container finished" podID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" exitCode=0 Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf"} Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200459 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200484 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"3a7c65907adb73b71465ec45c8d0a735be7267b5d9f38d33359388e78eaded22"} Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200512 4869 scope.go:117] "RemoveContainer" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.232118 4869 scope.go:117] "RemoveContainer" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.238528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.247954 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.263189 4869 scope.go:117] "RemoveContainer" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.304025 4869 scope.go:117] "RemoveContainer" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" Feb 02 15:56:29 crc kubenswrapper[4869]: E0202 15:56:29.304447 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf\": container with ID starting with 7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf not found: ID does not exist" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.304488 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf"} err="failed to get container status \"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf\": rpc error: code = NotFound desc = could not find container \"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf\": container with ID starting with 7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf not found: ID does not exist" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.304513 4869 scope.go:117] "RemoveContainer" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" Feb 02 15:56:29 crc kubenswrapper[4869]: E0202 15:56:29.304978 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2\": container with ID starting with 4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2 not found: ID does not exist" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.305004 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2"} err="failed to get container status \"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2\": rpc error: code = NotFound desc = could not find container \"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2\": container with ID starting with 4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2 not found: ID does not exist" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.305020 4869 scope.go:117] "RemoveContainer" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" Feb 02 15:56:29 crc kubenswrapper[4869]: E0202 15:56:29.305285 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db\": container with ID starting with 9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db not found: ID does not exist" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.305311 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db"} err="failed to get container status \"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db\": rpc error: code = NotFound desc = could not find container \"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db\": container with ID starting with 9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db not found: ID does not exist" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.474738 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" path="/var/lib/kubelet/pods/c818aa24-fa5f-4240-9b0b-66d16f60329e/volumes" Feb 02 15:56:38 crc kubenswrapper[4869]: I0202 15:56:38.462502 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:56:38 crc kubenswrapper[4869]: E0202 15:56:38.463344 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:56:49 crc kubenswrapper[4869]: I0202 15:56:49.469553 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:56:49 crc kubenswrapper[4869]: E0202 15:56:49.470579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:57:00 crc kubenswrapper[4869]: I0202 15:57:00.462864 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:57:00 crc kubenswrapper[4869]: E0202 15:57:00.464050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:57:13 crc kubenswrapper[4869]: I0202 15:57:13.463035 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:57:13 crc kubenswrapper[4869]: E0202 15:57:13.463705 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:57:25 crc kubenswrapper[4869]: I0202 15:57:25.463529 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:57:25 crc kubenswrapper[4869]: E0202 15:57:25.464506 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:57:39 crc kubenswrapper[4869]: I0202 15:57:39.476777 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:57:39 crc kubenswrapper[4869]: E0202 15:57:39.477837 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:57:53 crc kubenswrapper[4869]: I0202 15:57:53.463218 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:57:53 crc kubenswrapper[4869]: E0202 15:57:53.463994 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:58:07 crc kubenswrapper[4869]: I0202 15:58:07.463134 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:58:07 crc kubenswrapper[4869]: E0202 15:58:07.475808 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:58:21 crc kubenswrapper[4869]: I0202 15:58:21.463653 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:58:21 crc kubenswrapper[4869]: E0202 15:58:21.465171 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:58:35 crc kubenswrapper[4869]: I0202 15:58:35.463560 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:58:35 crc kubenswrapper[4869]: E0202 15:58:35.464540 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:58:47 crc kubenswrapper[4869]: I0202 15:58:47.462207 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:58:47 crc kubenswrapper[4869]: E0202 15:58:47.463050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:59:00 crc kubenswrapper[4869]: I0202 15:59:00.463774 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:59:00 crc kubenswrapper[4869]: E0202 15:59:00.465111 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:59:15 crc kubenswrapper[4869]: I0202 15:59:15.463228 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:59:15 crc kubenswrapper[4869]: E0202 15:59:15.463901 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.272966 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"] Feb 02 15:59:24 crc kubenswrapper[4869]: E0202 15:59:24.274314 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-utilities" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274339 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-utilities" Feb 02 15:59:24 crc kubenswrapper[4869]: E0202 15:59:24.274389 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-content" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274401 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-content" Feb 02 15:59:24 crc kubenswrapper[4869]: E0202 15:59:24.274433 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274446 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274825 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.277402 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.281842 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"] Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.417227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.417356 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.417471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.519501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.519606 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.519684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.520135 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.520606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.553020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.607645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.110375 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"] Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.900111 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166" exitCode=0 Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.900174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"} Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.900205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerStarted","Data":"6baa853afe0f90fe9a7256d9639c7a83d812486db1067c2f6feceebd747b7a24"} Feb 02 15:59:27 crc kubenswrapper[4869]: I0202 15:59:27.918374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerStarted","Data":"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"} Feb 02 15:59:28 crc kubenswrapper[4869]: I0202 15:59:28.463852 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:59:28 crc kubenswrapper[4869]: E0202 15:59:28.464357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:59:30 crc kubenswrapper[4869]: I0202 15:59:30.949503 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed" exitCode=0 Feb 02 15:59:30 crc kubenswrapper[4869]: I0202 15:59:30.949591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"} Feb 02 15:59:31 crc kubenswrapper[4869]: I0202 15:59:31.963665 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerStarted","Data":"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"} Feb 02 15:59:32 crc kubenswrapper[4869]: I0202 15:59:32.001150 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-j2vgn" podStartSLOduration=2.532055345 podStartE2EDuration="8.001124226s" podCreationTimestamp="2026-02-02 15:59:24 +0000 UTC" firstStartedPulling="2026-02-02 15:59:25.902421137 +0000 UTC m=+5167.547057907" lastFinishedPulling="2026-02-02 15:59:31.371490008 +0000 UTC m=+5173.016126788" observedRunningTime="2026-02-02 15:59:31.989333942 +0000 UTC m=+5173.633970712" watchObservedRunningTime="2026-02-02 15:59:32.001124226 +0000 UTC m=+5173.645761046" Feb 02 15:59:34 crc kubenswrapper[4869]: I0202 15:59:34.607860 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:34 crc kubenswrapper[4869]: I0202 15:59:34.609051 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:35 crc kubenswrapper[4869]: I0202 15:59:35.664508 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2vgn" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" probeResult="failure" output=< Feb 02 15:59:35 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 15:59:35 crc kubenswrapper[4869]: > Feb 02 15:59:40 crc kubenswrapper[4869]: I0202 15:59:40.464021 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:59:40 crc kubenswrapper[4869]: E0202 15:59:40.465131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:59:44 crc kubenswrapper[4869]: I0202 15:59:44.710143 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:44 crc kubenswrapper[4869]: I0202 15:59:44.769312 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:44 crc kubenswrapper[4869]: I0202 15:59:44.946483 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"] Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.107415 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2vgn" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" containerID="cri-o://d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" gracePeriod=2 Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.642939 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.820204 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.820316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.820338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.821685 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities" (OuterVolumeSpecName: "utilities") pod "3cabbeee-42cb-4803-a4fd-e0cf4845d192" (UID: "3cabbeee-42cb-4803-a4fd-e0cf4845d192"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.829135 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt" (OuterVolumeSpecName: "kube-api-access-d9vzt") pod "3cabbeee-42cb-4803-a4fd-e0cf4845d192" (UID: "3cabbeee-42cb-4803-a4fd-e0cf4845d192"). InnerVolumeSpecName "kube-api-access-d9vzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.923528 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") on node \"crc\" DevicePath \"\"" Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.923583 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.937759 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cabbeee-42cb-4803-a4fd-e0cf4845d192" (UID: "3cabbeee-42cb-4803-a4fd-e0cf4845d192"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.025723 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122836 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" exitCode=0 Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"} Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122924 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"6baa853afe0f90fe9a7256d9639c7a83d812486db1067c2f6feceebd747b7a24"} Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122979 4869 scope.go:117] "RemoveContainer" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.168041 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"] Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.168228 4869 scope.go:117] "RemoveContainer" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.173414 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"] Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.206975 4869 scope.go:117] "RemoveContainer" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.268798 4869 scope.go:117] "RemoveContainer" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" Feb 02 15:59:47 crc kubenswrapper[4869]: E0202 15:59:47.269340 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155\": container with ID starting with d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155 not found: ID does not exist" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269383 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"} err="failed to get container status \"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155\": rpc error: code = NotFound desc = could not find container \"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155\": container with ID starting with d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155 not found: ID does not exist" Feb 02 15:59:47 crc 
kubenswrapper[4869]: I0202 15:59:47.269409 4869 scope.go:117] "RemoveContainer" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed" Feb 02 15:59:47 crc kubenswrapper[4869]: E0202 15:59:47.269660 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed\": container with ID starting with afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed not found: ID does not exist" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269704 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"} err="failed to get container status \"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed\": rpc error: code = NotFound desc = could not find container \"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed\": container with ID starting with afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed not found: ID does not exist" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269717 4869 scope.go:117] "RemoveContainer" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166" Feb 02 15:59:47 crc kubenswrapper[4869]: E0202 15:59:47.270038 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166\": container with ID starting with 8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166 not found: ID does not exist" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.270059 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"} err="failed to get container status \"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166\": rpc error: code = NotFound desc = could not find container \"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166\": container with ID starting with 8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166 not found: ID does not exist" Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.473389 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" path="/var/lib/kubelet/pods/3cabbeee-42cb-4803-a4fd-e0cf4845d192/volumes" Feb 02 15:59:54 crc kubenswrapper[4869]: I0202 15:59:54.463577 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:59:54 crc kubenswrapper[4869]: E0202 15:59:54.464293 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.162678 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"] Feb 02 16:00:00 crc kubenswrapper[4869]: E0202 16:00:00.163702 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.163718 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" Feb 02 16:00:00 crc kubenswrapper[4869]: E0202 16:00:00.163741 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-content" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.163749 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-content" Feb 02 16:00:00 crc kubenswrapper[4869]: E0202 16:00:00.163763 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-utilities" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.163772 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-utilities" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.164086 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.164840 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.167624 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.169716 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.190425 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"] Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.226779 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.226919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.226947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.328721 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.328764 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.328888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.329880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.334870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.355144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.492223 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.992526 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"] Feb 02 16:00:01 crc kubenswrapper[4869]: I0202 16:00:01.284317 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerStarted","Data":"46292c3441f018ca4dd7c614a9eecb2a4574facde6b3dd81d20d43ae16aca676"} Feb 02 16:00:01 crc kubenswrapper[4869]: I0202 16:00:01.284369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerStarted","Data":"039837de22242761a89dbabfc668e7ba6a60ec68f859298d86cbad8eee7e0fa2"} Feb 02 16:00:01 crc kubenswrapper[4869]: I0202 16:00:01.309966 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" podStartSLOduration=1.309948258 podStartE2EDuration="1.309948258s" podCreationTimestamp="2026-02-02 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 16:00:01.306540767 +0000 UTC m=+5202.951177537" watchObservedRunningTime="2026-02-02 16:00:01.309948258 +0000 UTC m=+5202.954585028" Feb 02 16:00:02 crc kubenswrapper[4869]: I0202 16:00:02.293794 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerID="46292c3441f018ca4dd7c614a9eecb2a4574facde6b3dd81d20d43ae16aca676" exitCode=0 Feb 02 16:00:02 crc kubenswrapper[4869]: I0202 16:00:02.293849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerDied","Data":"46292c3441f018ca4dd7c614a9eecb2a4574facde6b3dd81d20d43ae16aca676"} Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.693540 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.805311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.805476 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.805504 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.806629 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a96ca5f-1cc6-4490-9db4-56f297abcbcf" (UID: "2a96ca5f-1cc6-4490-9db4-56f297abcbcf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.811125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s" (OuterVolumeSpecName: "kube-api-access-2tt8s") pod "2a96ca5f-1cc6-4490-9db4-56f297abcbcf" (UID: "2a96ca5f-1cc6-4490-9db4-56f297abcbcf"). InnerVolumeSpecName "kube-api-access-2tt8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.811636 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a96ca5f-1cc6-4490-9db4-56f297abcbcf" (UID: "2a96ca5f-1cc6-4490-9db4-56f297abcbcf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.908294 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.908338 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") on node \"crc\" DevicePath \"\"" Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.908348 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.318436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerDied","Data":"039837de22242761a89dbabfc668e7ba6a60ec68f859298d86cbad8eee7e0fa2"} Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.318494 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="039837de22242761a89dbabfc668e7ba6a60ec68f859298d86cbad8eee7e0fa2" Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.318524 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.786207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.795340 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 16:00:05 crc kubenswrapper[4869]: I0202 16:00:05.463662 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:00:05 crc kubenswrapper[4869]: E0202 16:00:05.464699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:00:05 crc kubenswrapper[4869]: I0202 16:00:05.484576 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" path="/var/lib/kubelet/pods/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c/volumes" Feb 02 16:00:19 crc kubenswrapper[4869]: I0202 16:00:19.495996 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:00:19 crc kubenswrapper[4869]: E0202 16:00:19.497164 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:00:32 crc kubenswrapper[4869]: I0202 16:00:32.250220 4869 scope.go:117] "RemoveContainer" containerID="1ee657e7e391fb0be0a60133a3c2bc04a0767f387cf6cc279ee259f05131226f" Feb 02 16:00:32 crc kubenswrapper[4869]: I0202 16:00:32.462950 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:00:32 crc kubenswrapper[4869]: E0202 16:00:32.463237 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:00:45 crc kubenswrapper[4869]: I0202 16:00:45.463623 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:00:45 crc kubenswrapper[4869]: E0202 16:00:45.464452 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:00:59 crc kubenswrapper[4869]: I0202 16:00:59.477763 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:00:59 crc kubenswrapper[4869]: E0202 16:00:59.478656 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:00:59 crc kubenswrapper[4869]: I0202 16:00:59.831086 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ccbb21f-23d9-48be-a212-547e064326f6" containerID="ac9a60d8c10f53a0410a3a801abad85986e73c2832d375d41caefea008863171" exitCode=1 Feb 02 16:00:59 crc kubenswrapper[4869]: I0202 16:00:59.831130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerDied","Data":"ac9a60d8c10f53a0410a3a801abad85986e73c2832d375d41caefea008863171"} Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.165416 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500801-n7swm"] Feb 02 16:01:00 crc kubenswrapper[4869]: E0202 16:01:00.166129 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerName="collect-profiles" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.166141 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerName="collect-profiles" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.166363 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerName="collect-profiles" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.167033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.173114 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500801-n7swm"] Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290383 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.392952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.393025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.393135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.393224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"keystone-cron-29500801-n7swm\" (UID: 
\"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.773089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.773131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.773550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.784070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.844444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.307288 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.389653 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500801-n7swm"] Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414577 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414684 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414723 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414748 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414831 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.415396 4869 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.416095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data" (OuterVolumeSpecName: "config-data") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.419046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.419207 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj" (OuterVolumeSpecName: "kube-api-access-zh7qj") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "kube-api-access-zh7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.421267 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.446199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.448559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.459405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.483353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517817 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517887 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517903 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517939 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517954 4869 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517967 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517981 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517993 4869 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.548632 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.620228 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 
02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.852028 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerDied","Data":"c08d2dd97b8a58de7b4399802e9fdd669c46ddb7f1d0f2a64a4f17afc41bb15d"} Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.852078 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08d2dd97b8a58de7b4399802e9fdd669c46ddb7f1d0f2a64a4f17afc41bb15d" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.852143 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.861482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerStarted","Data":"4c40d44f0adae652ed9d418c3153ac6f7654d77d457608da8a24a0570aeaf2b9"} Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.861526 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerStarted","Data":"1cefb99e5f64ca437ace858bea1e79a6cd5a8188aa2807064555af4b66cedc09"} Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.886098 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500801-n7swm" podStartSLOduration=1.886079536 podStartE2EDuration="1.886079536s" podCreationTimestamp="2026-02-02 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 16:01:01.879680201 +0000 UTC m=+5263.524316991" watchObservedRunningTime="2026-02-02 16:01:01.886079536 +0000 UTC m=+5263.530716306" Feb 02 16:01:04 crc kubenswrapper[4869]: I0202 16:01:04.887083 4869 generic.go:334] "Generic (PLEG): container finished" podID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerID="4c40d44f0adae652ed9d418c3153ac6f7654d77d457608da8a24a0570aeaf2b9" exitCode=0 Feb 02 16:01:04 crc kubenswrapper[4869]: I0202 16:01:04.887152 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerDied","Data":"4c40d44f0adae652ed9d418c3153ac6f7654d77d457608da8a24a0570aeaf2b9"} Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.256714 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.316978 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.317111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.317155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.317275 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.332189 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.332442 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4" (OuterVolumeSpecName: "kube-api-access-xf4x4") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "kube-api-access-xf4x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.346000 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.384853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data" (OuterVolumeSpecName: "config-data") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420118 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420158 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420168 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420176 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.905012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerDied","Data":"1cefb99e5f64ca437ace858bea1e79a6cd5a8188aa2807064555af4b66cedc09"} Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.905047 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cefb99e5f64ca437ace858bea1e79a6cd5a8188aa2807064555af4b66cedc09" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.905072 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.972149 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 16:01:10 crc kubenswrapper[4869]: E0202 16:01:10.973250 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" containerName="tempest-tests-tempest-tests-runner" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973268 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" containerName="tempest-tests-tempest-tests-runner" Feb 02 16:01:10 crc kubenswrapper[4869]: E0202 16:01:10.973290 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerName="keystone-cron" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973298 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerName="keystone-cron" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973574 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerName="keystone-cron" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973598 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" containerName="tempest-tests-tempest-tests-runner" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.974380 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.978332 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-72k4z" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.987604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.038344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.038421 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmw8\" (UniqueName: \"kubernetes.io/projected/6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d-kube-api-access-7bmw8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.140041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.140163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmw8\" (UniqueName: \"kubernetes.io/projected/6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d-kube-api-access-7bmw8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.140723 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.158573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmw8\" (UniqueName: \"kubernetes.io/projected/6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d-kube-api-access-7bmw8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.164279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc 
kubenswrapper[4869]: I0202 16:01:11.236495 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.243650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.278185 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.301394 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.345470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.345524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.345549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc 
kubenswrapper[4869]: I0202 16:01:11.448923 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.466315 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:01:11 crc kubenswrapper[4869]: E0202 16:01:11.467029 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.486724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.581883 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.747244 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.759540 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.957362 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d","Type":"ContainerStarted","Data":"192ffe3191ab6cc78dc87919064441ec6c892ea35f60414236288b735b2f6893"} Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.050993 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.969414 4869 generic.go:334] "Generic (PLEG): container finished" podID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" exitCode=0 Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.969452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1"} Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.970039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"8c561c68a8d80261d3ae57c5116c0d78271a1ec102819936dbd21831ba6c58c2"} Feb 02 16:01:13 crc kubenswrapper[4869]: I0202 16:01:13.980076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"} Feb 02 16:01:13 crc kubenswrapper[4869]: I0202 16:01:13.981762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d","Type":"ContainerStarted","Data":"571ffcdf208ee41bf7942053a1cb2d0aa05f16787f1a599db9876ed5d2b2f4ce"} Feb 02 16:01:14 crc kubenswrapper[4869]: I0202 16:01:14.016251 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.9207120399999997 podStartE2EDuration="4.016230358s" podCreationTimestamp="2026-02-02 16:01:10 +0000 UTC" firstStartedPulling="2026-02-02 16:01:11.759322295 +0000 UTC m=+5273.403959065" lastFinishedPulling="2026-02-02 16:01:12.854840613 +0000 UTC m=+5274.499477383" observedRunningTime="2026-02-02 16:01:14.010198643 +0000 UTC m=+5275.654835413" watchObservedRunningTime="2026-02-02 16:01:14.016230358 +0000 UTC m=+5275.660867128" Feb 02 16:01:14 crc kubenswrapper[4869]: I0202 16:01:14.993250 4869 generic.go:334] "Generic (PLEG): container finished" podID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" exitCode=0 Feb 02 16:01:14 crc kubenswrapper[4869]: I0202 16:01:14.993305 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"} Feb 02 16:01:16 crc kubenswrapper[4869]: I0202 16:01:16.003131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6"} Feb 02 16:01:16 crc kubenswrapper[4869]: I0202 16:01:16.033722 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nb88j" podStartSLOduration=2.6043043409999997 podStartE2EDuration="5.033697565s" podCreationTimestamp="2026-02-02 16:01:11 +0000 UTC" firstStartedPulling="2026-02-02 16:01:12.970853331 +0000 UTC m=+5274.615490101" lastFinishedPulling="2026-02-02 16:01:15.400246555 +0000 UTC m=+5277.044883325" observedRunningTime="2026-02-02 16:01:16.027391703 +0000 UTC m=+5277.672028473" watchObservedRunningTime="2026-02-02 16:01:16.033697565 +0000 UTC m=+5277.678334335" Feb 02 16:01:21 crc kubenswrapper[4869]: I0202 16:01:21.582455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:21 crc kubenswrapper[4869]: I0202 16:01:21.584113 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:21 crc kubenswrapper[4869]: I0202 16:01:21.626351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:22 crc kubenswrapper[4869]: I0202 16:01:22.142444 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:22 crc kubenswrapper[4869]: I0202 
16:01:22.184849 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:22 crc kubenswrapper[4869]: I0202 16:01:22.462807 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:01:23 crc kubenswrapper[4869]: I0202 16:01:23.067179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"} Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.077656 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nb88j" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" containerID="cri-o://8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" gracePeriod=2 Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.505194 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.621180 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.621249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.621316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.623420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities" (OuterVolumeSpecName: "utilities") pod "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" (UID: "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.632325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn" (OuterVolumeSpecName: "kube-api-access-x7qdn") pod "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" (UID: "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49"). InnerVolumeSpecName "kube-api-access-x7qdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.652513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" (UID: "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.723721 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.724072 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.724084 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088183 4869 generic.go:334] "Generic (PLEG): container finished" podID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" exitCode=0 Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6"} Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088270 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"8c561c68a8d80261d3ae57c5116c0d78271a1ec102819936dbd21831ba6c58c2"} Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088292 4869 scope.go:117] "RemoveContainer" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088449 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.116862 4869 scope.go:117] "RemoveContainer" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.137079 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.145697 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.170442 4869 scope.go:117] "RemoveContainer" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.205435 4869 scope.go:117] "RemoveContainer" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" Feb 02 16:01:25 crc kubenswrapper[4869]: E0202 16:01:25.205958 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6\": container with ID starting with 8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6 not found: ID does not exist" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206012 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6"} err="failed to get container status \"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6\": rpc error: code = NotFound desc = could not find container \"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6\": container with ID starting with 8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6 not found: ID does not exist" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206046 4869 scope.go:117] "RemoveContainer" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" Feb 02 16:01:25 crc kubenswrapper[4869]: E0202 16:01:25.206721 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3\": container with ID starting with ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3 not found: ID does not exist" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206764 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"} err="failed to get container status \"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3\": rpc error: code = NotFound desc = could not find container \"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3\": container with ID starting with ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3 not found: ID does not exist" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206790 4869 scope.go:117] "RemoveContainer" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" Feb 02 16:01:25 crc kubenswrapper[4869]: E0202 16:01:25.207127 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1\": container with ID starting with e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1 not found: ID does not exist" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.207156 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1"} err="failed to get container status \"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1\": rpc error: code = NotFound desc = could not find container \"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1\": container with ID starting with e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1 not found: ID does not exist" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.478640 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" path="/var/lib/kubelet/pods/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49/volumes" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.488934 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:01:53 crc kubenswrapper[4869]: E0202 16:01:53.489862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-content" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.489877 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-content" Feb 02 16:01:53 crc kubenswrapper[4869]: E0202 16:01:53.489897 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-utilities" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.489921 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-utilities" Feb 02 16:01:53 crc kubenswrapper[4869]: E0202 16:01:53.489943 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.489950 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.490358 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.491290 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.494064 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9szhh"/"default-dockercfg-kj6xn" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.499710 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9szhh"/"openshift-service-ca.crt" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.500176 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9szhh"/"kube-root-ca.crt" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.509188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.546683 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.547004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.648732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.648843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.649444 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.683143 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.814815 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:54 crc kubenswrapper[4869]: I0202 16:01:54.277733 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:01:54 crc kubenswrapper[4869]: I0202 16:01:54.372953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerStarted","Data":"ab7c4fdd48474e2f60641d8627c5c42465d5c53003bf3a4e726e765ab0daab84"} Feb 02 16:02:00 crc kubenswrapper[4869]: I0202 16:02:00.437934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerStarted","Data":"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1"} Feb 02 16:02:00 crc kubenswrapper[4869]: I0202 16:02:00.438447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerStarted","Data":"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"} Feb 02 16:02:00 crc kubenswrapper[4869]: I0202 16:02:00.461744 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9szhh/must-gather-wq69k" podStartSLOduration=2.561421871 podStartE2EDuration="7.46172156s" podCreationTimestamp="2026-02-02 16:01:53 +0000 UTC" firstStartedPulling="2026-02-02 16:01:54.285556044 +0000 UTC m=+5315.930192824" lastFinishedPulling="2026-02-02 16:01:59.185855743 +0000 UTC m=+5320.830492513" observedRunningTime="2026-02-02 16:02:00.453893922 +0000 UTC m=+5322.098530782" watchObservedRunningTime="2026-02-02 16:02:00.46172156 +0000 UTC m=+5322.106358330" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.112484 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/crc-debug-6r9jq"] Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.114747 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.216830 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.216935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.319342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.319445 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.319508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.338533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.436309 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.498827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" event={"ID":"4883a162-0123-4994-b91f-680ccb87e785","Type":"ContainerStarted","Data":"da9a1ba8d0e61d04f903c2e3c8eceb258c48f2a092d0744222de7809359d62f8"} Feb 02 16:02:17 crc kubenswrapper[4869]: I0202 16:02:17.608999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" event={"ID":"4883a162-0123-4994-b91f-680ccb87e785","Type":"ContainerStarted","Data":"d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f"} Feb 02 16:02:17 crc kubenswrapper[4869]: I0202 16:02:17.632704 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" podStartSLOduration=1.670547413 podStartE2EDuration="12.63267934s" podCreationTimestamp="2026-02-02 16:02:05 +0000 UTC" firstStartedPulling="2026-02-02 16:02:05.467518813 +0000 UTC m=+5327.112155573" lastFinishedPulling="2026-02-02 16:02:16.42965073 +0000 UTC m=+5338.074287500" observedRunningTime="2026-02-02 16:02:17.6194296 +0000 UTC m=+5339.264066370" watchObservedRunningTime="2026-02-02 16:02:17.63267934 +0000 UTC m=+5339.277316110" Feb 02 16:03:05 crc kubenswrapper[4869]: I0202 16:03:05.047551 4869 generic.go:334] "Generic (PLEG): container finished" podID="4883a162-0123-4994-b91f-680ccb87e785" containerID="d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f" exitCode=0 Feb 02 16:03:05 crc kubenswrapper[4869]: I0202 16:03:05.047672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" event={"ID":"4883a162-0123-4994-b91f-680ccb87e785","Type":"ContainerDied","Data":"d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f"} Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.209312 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.242603 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-6r9jq"] Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.250992 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-6r9jq"] Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.328318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"4883a162-0123-4994-b91f-680ccb87e785\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.328447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host" (OuterVolumeSpecName: "host") pod "4883a162-0123-4994-b91f-680ccb87e785" (UID: "4883a162-0123-4994-b91f-680ccb87e785"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.328887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"4883a162-0123-4994-b91f-680ccb87e785\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.329320 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.334403 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68" (OuterVolumeSpecName: "kube-api-access-vjf68") pod "4883a162-0123-4994-b91f-680ccb87e785" (UID: "4883a162-0123-4994-b91f-680ccb87e785"). InnerVolumeSpecName "kube-api-access-vjf68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.431624 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.072397 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9a1ba8d0e61d04f903c2e3c8eceb258c48f2a092d0744222de7809359d62f8" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.072462 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.441279 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/crc-debug-pztxt"] Feb 02 16:03:07 crc kubenswrapper[4869]: E0202 16:03:07.443827 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4883a162-0123-4994-b91f-680ccb87e785" containerName="container-00" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.443869 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4883a162-0123-4994-b91f-680ccb87e785" containerName="container-00" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.444182 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4883a162-0123-4994-b91f-680ccb87e785" containerName="container-00" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.445013 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.474509 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4883a162-0123-4994-b91f-680ccb87e785" path="/var/lib/kubelet/pods/4883a162-0123-4994-b91f-680ccb87e785/volumes" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.565162 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.565311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.667874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.667988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.668119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.687477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.764656 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:08 crc kubenswrapper[4869]: I0202 16:03:08.083823 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-pztxt" event={"ID":"4fb26728-ed2e-4205-b7f5-ca7a98b8c910","Type":"ContainerStarted","Data":"77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7"} Feb 02 16:03:08 crc kubenswrapper[4869]: I0202 16:03:08.084301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-pztxt" event={"ID":"4fb26728-ed2e-4205-b7f5-ca7a98b8c910","Type":"ContainerStarted","Data":"5d263a760cd3c718d7a45fe4a9a6e935c14e4a22487dd64c8dac3deec49b788a"} Feb 02 16:03:08 crc kubenswrapper[4869]: I0202 16:03:08.106993 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9szhh/crc-debug-pztxt" podStartSLOduration=1.106968165 podStartE2EDuration="1.106968165s" podCreationTimestamp="2026-02-02 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 16:03:08.097162768 +0000 UTC m=+5389.741799538" watchObservedRunningTime="2026-02-02 16:03:08.106968165 +0000 UTC m=+5389.751604955" Feb 02 16:03:09 crc kubenswrapper[4869]: I0202 16:03:09.095345 4869 generic.go:334] "Generic (PLEG): container finished" podID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerID="77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7" exitCode=0 Feb 02 16:03:09 crc kubenswrapper[4869]: I0202 16:03:09.095436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-pztxt" event={"ID":"4fb26728-ed2e-4205-b7f5-ca7a98b8c910","Type":"ContainerDied","Data":"77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7"} Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.202726 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.321032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.321115 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.322425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host" (OuterVolumeSpecName: "host") pod "4fb26728-ed2e-4205-b7f5-ca7a98b8c910" (UID: "4fb26728-ed2e-4205-b7f5-ca7a98b8c910"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.328782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk" (OuterVolumeSpecName: "kube-api-access-h66hk") pod "4fb26728-ed2e-4205-b7f5-ca7a98b8c910" (UID: "4fb26728-ed2e-4205-b7f5-ca7a98b8c910"). 
InnerVolumeSpecName "kube-api-access-h66hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.423815 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.423852 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.849431 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-pztxt"] Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.860421 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-pztxt"] Feb 02 16:03:11 crc kubenswrapper[4869]: I0202 16:03:11.118759 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d263a760cd3c718d7a45fe4a9a6e935c14e4a22487dd64c8dac3deec49b788a" Feb 02 16:03:11 crc kubenswrapper[4869]: I0202 16:03:11.118871 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:11 crc kubenswrapper[4869]: I0202 16:03:11.477540 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" path="/var/lib/kubelet/pods/4fb26728-ed2e-4205-b7f5-ca7a98b8c910/volumes" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.012081 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/crc-debug-8rgfb"] Feb 02 16:03:12 crc kubenswrapper[4869]: E0202 16:03:12.013221 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerName="container-00" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.013438 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerName="container-00" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.014103 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerName="container-00" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.015446 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.066248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.066325 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.168154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.168248 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.168358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.196320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.339056 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.139886 4869 generic.go:334] "Generic (PLEG): container finished" podID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerID="ff2ba6291f48fd05032c2b7a4b4afad2ee04b00ae5888a83b68d20169b675016" exitCode=0 Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.140001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" event={"ID":"1c6d8b60-93c1-4b66-b0fb-bda7a3104357","Type":"ContainerDied","Data":"ff2ba6291f48fd05032c2b7a4b4afad2ee04b00ae5888a83b68d20169b675016"} Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.140234 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" event={"ID":"1c6d8b60-93c1-4b66-b0fb-bda7a3104357","Type":"ContainerStarted","Data":"a54b88a3f704e5e2d9ef6352bbab60b6335c86ce3541c781f3cb44c5119cbd9c"} Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.187809 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-8rgfb"] Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.198032 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-8rgfb"] Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.259291 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.313360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.313448 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.313587 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host" (OuterVolumeSpecName: "host") pod "1c6d8b60-93c1-4b66-b0fb-bda7a3104357" (UID: "1c6d8b60-93c1-4b66-b0fb-bda7a3104357"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.314024 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.323151 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq" (OuterVolumeSpecName: "kube-api-access-g7vxq") pod "1c6d8b60-93c1-4b66-b0fb-bda7a3104357" (UID: "1c6d8b60-93c1-4b66-b0fb-bda7a3104357"). InnerVolumeSpecName "kube-api-access-g7vxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.415719 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:15 crc kubenswrapper[4869]: I0202 16:03:15.172645 4869 scope.go:117] "RemoveContainer" containerID="ff2ba6291f48fd05032c2b7a4b4afad2ee04b00ae5888a83b68d20169b675016" Feb 02 16:03:15 crc kubenswrapper[4869]: I0202 16:03:15.172674 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:15 crc kubenswrapper[4869]: I0202 16:03:15.475045 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" path="/var/lib/kubelet/pods/1c6d8b60-93c1-4b66-b0fb-bda7a3104357/volumes" Feb 02 16:03:45 crc kubenswrapper[4869]: I0202 16:03:45.304745 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 16:03:45 crc kubenswrapper[4869]: I0202 16:03:45.305396 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 16:03:46 crc kubenswrapper[4869]: I0202 16:03:46.745651 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77794c6b74-fhtds_bbb63205-2a5c-4177-8b7f-2a141324ba49/barbican-api/0.log" Feb 02 16:03:46 crc kubenswrapper[4869]: I0202 16:03:46.989885 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77794c6b74-fhtds_bbb63205-2a5c-4177-8b7f-2a141324ba49/barbican-api-log/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.001530 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d7f6679db-zbdxv_9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3/barbican-keystone-listener/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.178584 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-675f9657dc-6qw7m_18463ac0-a171-4ae0-9201-8df3d574eb70/barbican-worker/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.242973 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-675f9657dc-6qw7m_18463ac0-a171-4ae0-9201-8df3d574eb70/barbican-worker-log/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.252388 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d7f6679db-zbdxv_9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3/barbican-keystone-listener-log/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.467500 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2_5ca847f3-12e0-43a7-af47-6739dc10627d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.523591 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/ceilometer-central-agent/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.669075 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/proxy-httpd/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.673253 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/ceilometer-notification-agent/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.688664 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/sg-core/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.871486 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r_89ab19c1-9bd6-4f8b-b295-aee078ee4b0d/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.880324 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh_67cb4a99-39e2-4e00-88f5-748ad16cb874/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:48 crc kubenswrapper[4869]: I0202 16:03:48.579996 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ffb18e2a-67e6-4932-97fb-dd57b66f6c93/probe/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.145847 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d8f007a5-a428-44ff-8c6d-5de0d08beb7c/cinder-scheduler/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.189620 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1fbb1ee0-3403-49aa-9e5c-3926dd981751/cinder-api/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.308860 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1fbb1ee0-3403-49aa-9e5c-3926dd981751/cinder-api-log/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.484049 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d8f007a5-a428-44ff-8c6d-5de0d08beb7c/probe/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.702932 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37/probe/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.904592 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-txn47_19c443c4-baed-4a61-bc6d-bc8ba528e326/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.114577 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z97k7_c94bd387-2568-4bea-a5be-0ff99e224681/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.415961 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5kt5g_2d493264-07c6-4809-9a3e-809e60997896/init/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.581445 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5kt5g_2d493264-07c6-4809-9a3e-809e60997896/init/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.841711 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5kt5g_2d493264-07c6-4809-9a3e-809e60997896/dnsmasq-dns/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.064146 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6439a406-db54-421d-b5c7-5911b35cfda3/glance-log/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.082248 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6439a406-db54-421d-b5c7-5911b35cfda3/glance-httpd/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.333146 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e4f5a226-bdff-4182-971c-e3a22264a7d6/glance-httpd/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.592864 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e4f5a226-bdff-4182-971c-e3a22264a7d6/glance-log/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.788829 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bc7747c5b-j78w2_8714c728-0089-451b-8335-ab32ef8c39ac/horizon/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.008908 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zd67g_1cfd609a-5580-47a7-bb6d-afc564ca64d4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.218651 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rsvsc_04202cce-c3c1-483c-9d50-0fcf9a398094/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.290508 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bc7747c5b-j78w2_8714c728-0089-451b-8335-ab32ef8c39ac/horizon-log/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.589196 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500741-9h6gs_d6019cb5-097c-4e32-b08f-dd117d4bcdf7/keystone-cron/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.788529 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500801-n7swm_35e8f12b-8b8b-4309-a57e-e46c357acc6d/keystone-cron/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.872658 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ffb18e2a-67e6-4932-97fb-dd57b66f6c93/cinder-backup/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.030183 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c78d1b99-1b30-416f-9afc-3dda8204e757/kube-state-metrics/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.296345 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9_83c45a4e-9fe0-4d8d-a74d-162a45a36d5e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.452013 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-api-0_68d3a7fe-1a89-4d45-9ffd-8057e313d3e9/manila-api-log/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.526763 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_68d3a7fe-1a89-4d45-9ffd-8057e313d3e9/manila-api/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.539253 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-575599577-dmndq_fc4c6770-5954-4777-8c4f-47397d045008/keystone-api/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.724645 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_52b1f1d7-270e-400d-b273-961b7142f38c/probe/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.796085 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_52b1f1d7-270e-400d-b273-961b7142f38c/manila-scheduler/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.816113 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0df9e23b-1681-42de-b9d6-87c4c518d082/manila-share/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.918699 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0df9e23b-1681-42de-b9d6-87c4c518d082/probe/0.log" Feb 02 16:03:54 crc kubenswrapper[4869]: I0202 16:03:54.458705 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bbd64cf97-7t5h5_1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca/neutron-httpd/0.log" Feb 02 16:03:54 crc kubenswrapper[4869]: I0202 16:03:54.622985 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bbd64cf97-7t5h5_1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca/neutron-api/0.log" Feb 02 16:03:54 crc kubenswrapper[4869]: I0202 16:03:54.646666 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g_cece8f41-7b97-43d1-b538-c09300006b15/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:55 crc kubenswrapper[4869]: I0202 16:03:55.356805 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_87abe16e-c4e3-4869-8f9e-6f9b46106c51/nova-cell0-conductor-conductor/0.log" Feb 02 16:03:55 crc kubenswrapper[4869]: I0202 16:03:55.649576 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6f2e77f7-6ccb-4992-8292-e69f277dc8f2/nova-api-log/0.log" Feb 02 16:03:55 crc kubenswrapper[4869]: I0202 16:03:55.892513 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7ed5d945-0024-455d-a2d4-c8724693b402/nova-cell1-conductor-conductor/0.log" Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.202249 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_127a427f-66a5-4d07-ac48-aea0da95d425/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.205553 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6f2e77f7-6ccb-4992-8292-e69f277dc8f2/nova-api-api/0.log" Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.390402 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk_196ff3ae-e676-4d40-9de4-ea6ad23a1e5e/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:56 crc 
kubenswrapper[4869]: I0202 16:03:56.487936 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0c133ea7-0c2e-4338-a24b-319409d4e41a/nova-metadata-log/0.log"
Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.924858 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_46796adc-7f57-405f-bb4c-a2ccb79153f2/nova-scheduler-scheduler/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.091640 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4287f1a9-b523-48a9-a999-fc8f34b212a4/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.262827 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4287f1a9-b523-48a9-a999-fc8f34b212a4/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.309121 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4287f1a9-b523-48a9-a999-fc8f34b212a4/galera/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.501551 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0db20771-eb71-4272-9814-ab5bf0fff1fe/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.725957 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0db20771-eb71-4272-9814-ab5bf0fff1fe/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.743889 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0db20771-eb71-4272-9814-ab5bf0fff1fe/galera/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.934504 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9c3c55b0-c9be-4635-9562-347406f90dff/openstackclient/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.219018 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-f7z74_d51425d7-d30c-466d-b478-17a637e3ef9f/ovn-controller/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.426246 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sr5dv_2b612893-5e70-472a-a65f-0d0c66f82de3/openstack-network-exporter/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.685931 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovsdb-server-init/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.875943 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovsdb-server-init/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.906118 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovs-vswitchd/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.098810 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovsdb-server/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.324317 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xjq2r_72dccf63-f84a-41bb-a601-d67db9557b64/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.391616 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0c133ea7-0c2e-4338-a24b-319409d4e41a/nova-metadata-metadata/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.562608 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f502e55d-56a7-4238-b2cc-46a4c2eb3945/openstack-network-exporter/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.624321 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f502e55d-56a7-4238-b2cc-46a4c2eb3945/ovn-northd/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.779028 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_208fe19b-f03b-4a68-b6f2-f9dc3783239e/openstack-network-exporter/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.805376 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_208fe19b-f03b-4a68-b6f2-f9dc3783239e/ovsdbserver-nb/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.826575 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37/cinder-volume/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.982364 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1078d20a-9d7e-45ef-8be5-bade239489c4/memcached/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.004484 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9a1c388-0473-4284-9a2c-09e3d97858f2/ovsdbserver-sb/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.006029 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9a1c388-0473-4284-9a2c-09e3d97858f2/openstack-network-exporter/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.159935 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-dc5588748-k6f99_ec674145-26a6-4ce9-9e00-083bccdad283/placement-api/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.226561 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebc9110-3186-4c3f-877b-44061d345584/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.291636 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-dc5588748-k6f99_ec674145-26a6-4ce9-9e00-083bccdad283/placement-log/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.442758 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebc9110-3186-4c3f-877b-44061d345584/rabbitmq/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.451733 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebc9110-3186-4c3f-877b-44061d345584/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.457636 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.674774 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.719037 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/rabbitmq/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.729699 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97_9ef6ee1c-f8bc-4060-8922-945b20187dfb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.875599 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-d946d_09ba8528-6790-4df1-92c8-828f0ccd858e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.925720 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lnnll_4b9e0145-82e1-4dde-a4d2-d17e482d01b7/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.959415 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-v2kr2_3d624d16-2868-4154-a700-18e0cebe9357/ssh-known-hosts-edpm-deployment/0.log"
Feb 02 16:04:01 crc kubenswrapper[4869]: I0202 16:04:01.184559 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d/test-operator-logs-container/0.log"
Feb 02 16:04:01 crc kubenswrapper[4869]: I0202 16:04:01.326510 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-48vgr_34077009-4156-4523-9f51-24147190e39c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 16:04:01 crc kubenswrapper[4869]: I0202 16:04:01.594066 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_1ccbb21f-23d9-48be-a212-547e064326f6/tempest-tests-tempest-tests-runner/0.log"
Feb 02 16:04:15 crc kubenswrapper[4869]: I0202 16:04:15.304120 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:04:15 crc kubenswrapper[4869]: I0202 16:04:15.305222 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:04:22 crc kubenswrapper[4869]: I0202 16:04:22.881454 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/util/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.054173 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/pull/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.071714 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/pull/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.095937 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/util/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.280054 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/extract/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.325584 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/pull/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.347203 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/util/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.537743 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-28mqn_f605f0c6-e023-433b-8e78-373b32387809/manager/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.689746 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-pbxmj_5ea40597-21e0-4548-ab09-e381dac894ef/manager/0.log"
Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.865880 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5d77f4dbc9-qmt77_f07dc950-121d-4a91-8489-dfc187196775/manager/0.log"
Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.087685 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-9ph7x_53467de5-c9d7-4aa0-973d-180c8cb84b27/manager/0.log"
Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.190830 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-cpjjt_ad8b0f9a-67d7-4897-af4b-f344b3d1c502/manager/0.log"
Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.594587 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-87bd9d46f-762xj_77902d6e-ef76-42b0-a40c-0b51f383f580/manager/0.log"
Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.752845 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-b4jxj_c0779518-9e33-43e3-b373-263d74fbbd0f/manager/0.log"
Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.885293 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-m9czv_f27a3d01-fbc5-46d9-9c11-ef6c21ead605/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.040262 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7775d87d9d-l2b72_993dae41-359f-47f7-9a2a-38f7c97d49de/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.116256 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-hpnsb_3b0cf904-7af8-4e57-a664-7e594e557445/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.303700 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-85899c864d-4cnfc_fc6638c4-5467-48c9-b725-284cd08372f6/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.385352 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-swhqr_c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.546743 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-2chmz_98a25bb6-75b1-49ad-8d7c-cc4e763470ec/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.705072 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl_bd94e783-b3ec-4d7e-b669-98255f029da6/manager/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.068656 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5d75b9d66c-jsstz_61702985-b65f-4603-9960-3a455bf05c9e/operator/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.336137 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g2t6v_39ba26b8-85bb-43c8-80cb-c9523ba9cac7/registry-server/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.630661 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-28zx5_cf357940-5e8d-4111-86e6-1fafd5e670cd/manager/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.878014 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-6vnjh_ac2b0707-5906-40df-9457-06739f19df84/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.093084 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-djzsw_6719d674-1dac-4af1-859b-ea6a2186a20a/operator/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.243837 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7b89fdf75b-zdwh8_98a357a8-0e70-4f30-a41a-8dde25612a8a/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.513710 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-565849b54-fm2kj_7af79025-a32d-4e73-9559-5991093e986a/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.581531 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ntthk_06f5e083-c0ea-4ad0-9a07-50707d84be61/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.761731 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-9fsf5_2dfa14d3-9496-44cb-948b-e4065a9930c8/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.830034 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b89ddb58-h2kl2_7e9b35b2-f20d-4102-b541-63d2822c215d/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.981612 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58566f7c4b-mnxtb_32aa6b38-d480-426c-a36c-4cf34c082e73/manager/0.log"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.303876 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.304410 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.304468 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.305313 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.305360 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89" gracePeriod=600
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.029869 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l692p_f89cdf2d-50e4-4089-8345-f11f7791826d/control-plane-machine-set-operator/0.log"
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031056 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89" exitCode=0
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"}
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031113 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"}
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031134 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.201846 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whptb_0ade6e3e-6274-4469-af6f-39455fd721db/kube-rbac-proxy/0.log"
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.215524 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whptb_0ade6e3e-6274-4469-af6f-39455fd721db/machine-api-operator/0.log"
Feb 02 16:04:58 crc kubenswrapper[4869]: I0202 16:04:58.464232 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7j57w_d96c83c3-8f98-40c8-85f8-37cdf10eaeb7/cert-manager-controller/0.log"
Feb 02 16:04:58 crc kubenswrapper[4869]: I0202 16:04:58.648394 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-498mc_92227558-4fbe-40b7-8a51-f9ba7043125a/cert-manager-cainjector/0.log"
Feb 02 16:04:58 crc kubenswrapper[4869]: I0202 16:04:58.739979 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dfqjm_804bb5fc-4d8e-4f9f-892b-6d9af2943dbd/cert-manager-webhook/0.log"
Feb 02 16:05:10 crc kubenswrapper[4869]: I0202 16:05:10.978499 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-sk72x_60ca7e15-9af2-4019-9481-39f8bc9e4ec7/nmstate-console-plugin/0.log"
Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.159298 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-87g86_3d92c75a-462e-4ff9-8373-8d91fb2624f4/nmstate-handler/0.log"
Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.224632 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-647lw_ec9ec105-2660-4787-89f3-5c0fe79e8e97/kube-rbac-proxy/0.log"
Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.299048 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-647lw_ec9ec105-2660-4787-89f3-5c0fe79e8e97/nmstate-metrics/0.log"
Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.363239 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bbvzg_f417537d-ce1d-461c-afec-09d3ec96c3b4/nmstate-operator/0.log"
Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.476072 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jf287_bd339f13-8405-47aa-b76a-2cef40d3ec11/nmstate-webhook/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.416011 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-45hcg_fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188/kube-rbac-proxy/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.601008 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-45hcg_fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188/controller/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.671378 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.850939 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.869183 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.899053 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log"
Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.926816 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.130805 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.180928 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.185634 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.207236 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.333286 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.371717 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.411178 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/controller/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.420578 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.572955 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/frr-metrics/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.660318 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/kube-rbac-proxy/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.715422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/kube-rbac-proxy-frr/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.797331 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/reloader/0.log"
Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.940325 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2v777_d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c/frr-k8s-webhook-server/0.log"
Feb 02 16:05:39 crc kubenswrapper[4869]: I0202 16:05:39.177109 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b74bd8485-6rx7p_7a0708ec-3eb5-4515-adf0-e36c732da54e/manager/0.log"
Feb 02 16:05:39 crc kubenswrapper[4869]: I0202 16:05:39.341284 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69b678c656-9prhr_322f75dd-f952-451d-b505-400b173b382c/webhook-server/0.log"
Feb 02 16:05:39 crc kubenswrapper[4869]: I0202 16:05:39.489376 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qkkx4_131f6807-e412-436c-8271-86f09259ae74/kube-rbac-proxy/0.log"
Feb 02 16:05:40 crc kubenswrapper[4869]: I0202 16:05:40.059508 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qkkx4_131f6807-e412-436c-8271-86f09259ae74/speaker/0.log"
Feb 02 16:05:40 crc kubenswrapper[4869]: I0202 16:05:40.232648 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/frr/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.309859 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/util/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.459578 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/util/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.468869 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/pull/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.498491 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/pull/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.675006 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/pull/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.701317 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/util/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.709393 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/extract/0.log"
Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.860542 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/util/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.021865 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/util/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.035769 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/pull/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.036252 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/pull/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.223957 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/util/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.225367 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/pull/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.282472 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/extract/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.414663 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-utilities/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.612857 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-content/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.617631 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-utilities/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.672825 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-content/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.777709 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-content/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.785968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-utilities/0.log"
Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.999477 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-utilities/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.206707 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-utilities/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.251271 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-content/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.294375 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-content/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.442839 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-utilities/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.474966 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-content/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.522990 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/registry-server/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.641239 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nbjts_ac6a4d49-eb04-4ee1-be26-63f67b0a092a/marketplace-operator/0.log"
Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.852773 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-utilities/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.109306 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-content/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.137990 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-content/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.159565 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-utilities/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.295145 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/registry-server/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.335543 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-content/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.336002 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-utilities/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.565344 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/registry-server/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.590140 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-utilities/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.733530 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-content/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.761058 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-utilities/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.800845 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-content/0.log"
Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.977573 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-utilities/0.log"
Feb 02 16:05:56 crc kubenswrapper[4869]: I0202 16:05:56.020449 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-content/0.log"
Feb 02 16:05:56 crc kubenswrapper[4869]: I0202 16:05:56.714212 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/registry-server/0.log"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.045389 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:32 crc kubenswrapper[4869]: E0202 16:06:32.046373 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerName="container-00"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.046390 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerName="container-00"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.046623 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerName="container-00"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.048267 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.060966 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.165251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.165395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.165425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.268182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.289180 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.400986 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.110719 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.974469 4869 generic.go:334] "Generic (PLEG): container finished" podID="77029322-bdbc-422f-8f29-8294fb8c1921" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe" exitCode=0
Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.974755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"}
Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.974786 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerStarted","Data":"7011d1d6eb35ac243cd911101dc03147167be19d4f9372fce27404d829dfb15d"}
Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.978051 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 16:06:35 crc kubenswrapper[4869]: I0202 16:06:35.994510 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerStarted","Data":"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"}
Feb 02 16:06:37 crc kubenswrapper[4869]: I0202 16:06:37.004115 4869 generic.go:334] "Generic (PLEG): container finished" podID="77029322-bdbc-422f-8f29-8294fb8c1921" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3" exitCode=0
Feb 02 16:06:37 crc kubenswrapper[4869]: I0202 16:06:37.004215 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"}
Feb 02 16:06:39 crc kubenswrapper[4869]: I0202 16:06:39.034026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerStarted","Data":"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"}
Feb 02 16:06:39 crc kubenswrapper[4869]: I0202 16:06:39.056018 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rzx97" podStartSLOduration=3.271606434 podStartE2EDuration="7.055998134s" podCreationTimestamp="2026-02-02 16:06:32 +0000 UTC" firstStartedPulling="2026-02-02 16:06:33.977762483 +0000 UTC m=+5595.622399253" lastFinishedPulling="2026-02-02 16:06:37.762154193 +0000 UTC m=+5599.406790953" observedRunningTime="2026-02-02 16:06:39.051754521 +0000 UTC m=+5600.696391291" watchObservedRunningTime="2026-02-02 16:06:39.055998134 +0000 UTC m=+5600.700634904"
Feb 02 16:06:42 crc kubenswrapper[4869]: I0202 16:06:42.401799 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:42 crc kubenswrapper[4869]: I0202 16:06:42.402227 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:42 crc kubenswrapper[4869]: I0202 16:06:42.472606 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:43 crc kubenswrapper[4869]: I0202 16:06:43.121063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:43 crc kubenswrapper[4869]: I0202 16:06:43.172113 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.092653 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rzx97" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" containerID="cri-o://9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c" gracePeriod=2
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.304283 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.304350 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.564431 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.686125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"77029322-bdbc-422f-8f29-8294fb8c1921\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") "
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.686190 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"77029322-bdbc-422f-8f29-8294fb8c1921\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") "
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.686268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"77029322-bdbc-422f-8f29-8294fb8c1921\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") "
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.699022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities" (OuterVolumeSpecName: "utilities") pod "77029322-bdbc-422f-8f29-8294fb8c1921" (UID: "77029322-bdbc-422f-8f29-8294fb8c1921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.708504 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd" (OuterVolumeSpecName: "kube-api-access-dm5wd") pod "77029322-bdbc-422f-8f29-8294fb8c1921" (UID: "77029322-bdbc-422f-8f29-8294fb8c1921"). InnerVolumeSpecName "kube-api-access-dm5wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.759114 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77029322-bdbc-422f-8f29-8294fb8c1921" (UID: "77029322-bdbc-422f-8f29-8294fb8c1921"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.788258 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.788308 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.788320 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") on node \"crc\" DevicePath \"\""
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106184 4869 generic.go:334] "Generic (PLEG): container finished" podID="77029322-bdbc-422f-8f29-8294fb8c1921" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c" exitCode=0
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"}
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"7011d1d6eb35ac243cd911101dc03147167be19d4f9372fce27404d829dfb15d"}
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106270 4869 scope.go:117] "RemoveContainer" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106390 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.132923 4869 scope.go:117] "RemoveContainer" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.151439 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.157508 4869 scope.go:117] "RemoveContainer" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.161056 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.200774 4869 scope.go:117] "RemoveContainer" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"
Feb 02 16:06:46 crc kubenswrapper[4869]: E0202 16:06:46.201427 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c\": container with ID starting with 9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c not found: ID does not exist" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.201487 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"} err="failed to get container status \"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c\": rpc error: code = NotFound desc = could not find container \"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c\": container with ID starting with 9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c not found: ID does not exist"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.201520 4869 scope.go:117] "RemoveContainer" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"
Feb 02 16:06:46 crc kubenswrapper[4869]: E0202 16:06:46.202068 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3\": container with ID starting with 448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3 not found: ID does not exist" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.202179 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"} err="failed to get container status \"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3\": rpc error: code = NotFound desc = could not find container \"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3\": container with ID starting with 448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3 not found: ID does not exist"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.202266 4869 scope.go:117] "RemoveContainer" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"
Feb 02 16:06:46 crc kubenswrapper[4869]: E0202 16:06:46.202749 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe\": container with ID starting with bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe not found: ID does not exist" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.202784 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"} err="failed to get container status \"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe\": rpc error: code = NotFound desc = could not find container \"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe\": container with ID starting with bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe not found: ID does not exist"
Feb 02 16:06:47 crc kubenswrapper[4869]: I0202 16:06:47.472704 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" path="/var/lib/kubelet/pods/77029322-bdbc-422f-8f29-8294fb8c1921/volumes"
Feb 02 16:07:15 crc kubenswrapper[4869]: I0202 16:07:15.304729 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:07:15 crc kubenswrapper[4869]: I0202 16:07:15.305510 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.304017 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.304670 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.304723 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.305891 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.305984 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" gracePeriod=600
Feb 02 16:07:45 crc kubenswrapper[4869]: E0202 16:07:45.429779 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.707308 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" exitCode=0
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.707354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"}
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.707411 4869 scope.go:117] "RemoveContainer" containerID="67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.708016 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"
Feb 02 16:07:45 crc kubenswrapper[4869]: E0202 16:07:45.709413 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:07:59 crc kubenswrapper[4869]: I0202 16:07:59.473087 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"
Feb 02 16:07:59 crc kubenswrapper[4869]: E0202 16:07:59.473795 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:08:04 crc kubenswrapper[4869]: I0202 16:08:04.898738 4869 generic.go:334] "Generic (PLEG): container finished" podID="56e87714-4847-4c2f-81a9-357123c1e872" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" exitCode=0
Feb 02 16:08:04 crc kubenswrapper[4869]: I0202 16:08:04.898801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerDied","Data":"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"}
Feb 02 16:08:04 crc kubenswrapper[4869]: I0202 16:08:04.900190 4869 scope.go:117] "RemoveContainer" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"
Feb 02
16:08:05 crc kubenswrapper[4869]: I0202 16:08:05.762256 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9szhh_must-gather-wq69k_56e87714-4847-4c2f-81a9-357123c1e872/gather/0.log" Feb 02 16:08:13 crc kubenswrapper[4869]: I0202 16:08:13.462950 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:13 crc kubenswrapper[4869]: E0202 16:08:13.463829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.114449 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.114743 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9szhh/must-gather-wq69k" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" containerID="cri-o://db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" gracePeriod=2 Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.127666 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.597335 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9szhh_must-gather-wq69k_56e87714-4847-4c2f-81a9-357123c1e872/copy/0.log" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.598739 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.732519 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"56e87714-4847-4c2f-81a9-357123c1e872\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.732596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"56e87714-4847-4c2f-81a9-357123c1e872\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.753121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s" (OuterVolumeSpecName: "kube-api-access-2pk5s") pod "56e87714-4847-4c2f-81a9-357123c1e872" (UID: "56e87714-4847-4c2f-81a9-357123c1e872"). InnerVolumeSpecName "kube-api-access-2pk5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.834766 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") on node \"crc\" DevicePath \"\"" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.903059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56e87714-4847-4c2f-81a9-357123c1e872" (UID: "56e87714-4847-4c2f-81a9-357123c1e872"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.936422 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.995407 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9szhh_must-gather-wq69k_56e87714-4847-4c2f-81a9-357123c1e872/copy/0.log" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.995854 4869 generic.go:334] "Generic (PLEG): container finished" podID="56e87714-4847-4c2f-81a9-357123c1e872" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" exitCode=143 Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.995960 4869 scope.go:117] "RemoveContainer" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.996187 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.022882 4869 scope.go:117] "RemoveContainer" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.109505 4869 scope.go:117] "RemoveContainer" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" Feb 02 16:08:15 crc kubenswrapper[4869]: E0202 16:08:15.110098 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1\": container with ID starting with db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1 not found: ID does not exist" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.110161 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1"} err="failed to get container status \"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1\": rpc error: code = NotFound desc = could not find container \"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1\": container with ID starting with db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1 not found: ID does not exist" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.110198 4869 scope.go:117] "RemoveContainer" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 16:08:15 crc kubenswrapper[4869]: E0202 16:08:15.110522 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2\": container with ID starting with f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2 not found: ID does not exist" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.110548 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"} err="failed to get container status \"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2\": rpc error: code = NotFound desc = could not find container \"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2\": container with ID starting with f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2 not found: ID does not exist" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.546048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e87714-4847-4c2f-81a9-357123c1e872" path="/var/lib/kubelet/pods/56e87714-4847-4c2f-81a9-357123c1e872/volumes" Feb 02 16:08:25 crc kubenswrapper[4869]: I0202 16:08:25.463164 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:25 crc kubenswrapper[4869]: E0202 16:08:25.464206 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:32 crc kubenswrapper[4869]: I0202 16:08:32.518569 4869 scope.go:117] "RemoveContainer" containerID="d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f" Feb 02 16:08:40 crc kubenswrapper[4869]: I0202 16:08:40.462327 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:40 crc kubenswrapper[4869]: E0202 16:08:40.464155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:55 crc kubenswrapper[4869]: I0202 16:08:55.462881 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:55 crc kubenswrapper[4869]: E0202 16:08:55.463715 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:09 crc kubenswrapper[4869]: I0202 16:09:09.468671 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:09 crc kubenswrapper[4869]: E0202 16:09:09.469733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:21 crc kubenswrapper[4869]: I0202 16:09:21.462583 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:21 crc kubenswrapper[4869]: E0202 16:09:21.463483 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:32 crc kubenswrapper[4869]: I0202 16:09:32.462220 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:32 crc kubenswrapper[4869]: E0202 16:09:32.464789 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:32 crc kubenswrapper[4869]: I0202 16:09:32.582666 4869 scope.go:117] "RemoveContainer" containerID="77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7" Feb 02 16:09:46 crc kubenswrapper[4869]: I0202 16:09:46.462866 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:46 crc kubenswrapper[4869]: E0202 16:09:46.464159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:01 crc kubenswrapper[4869]: I0202 16:10:01.463361 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:01 crc kubenswrapper[4869]: E0202 16:10:01.464241 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:14 crc kubenswrapper[4869]: I0202 16:10:14.462954 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:14 crc kubenswrapper[4869]: E0202 16:10:14.464056 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:26 crc kubenswrapper[4869]: I0202 16:10:26.463443 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:26 crc kubenswrapper[4869]: E0202 16:10:26.465739 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.804090 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805128 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805147 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" Feb 02 
16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805171 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="gather" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805180 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="gather" Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805201 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-utilities" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805211 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-utilities" Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805232 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805283 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805306 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-content" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-content" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805809 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805837 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="gather" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805853 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.808080 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.822231 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.924103 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.924169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.924364 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.027570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.027642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.027762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.028279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.028398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.054221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.142671 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.653257 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:31 crc kubenswrapper[4869]: I0202 16:10:31.276077 4869 generic.go:334] "Generic (PLEG): container finished" podID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" exitCode=0 Feb 02 16:10:31 crc kubenswrapper[4869]: I0202 16:10:31.276191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673"} Feb 02 16:10:31 crc kubenswrapper[4869]: I0202 16:10:31.276324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerStarted","Data":"52ae42d34a9f366250b3a49bfcf92a731d2e83c5ababadba7f489e0906888585"} Feb 02 16:10:33 crc kubenswrapper[4869]: I0202 16:10:33.293620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerStarted","Data":"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818"} Feb 02 16:10:36 crc kubenswrapper[4869]: I0202 16:10:36.326388 4869 generic.go:334] "Generic (PLEG): container finished" podID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" exitCode=0 Feb 02 16:10:36 crc kubenswrapper[4869]: I0202 16:10:36.326459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818"} Feb 02 16:10:37 crc kubenswrapper[4869]: I0202 16:10:37.344003 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerStarted","Data":"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae"} Feb 02 16:10:37 crc kubenswrapper[4869]: I0202 16:10:37.377115 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8qk85" podStartSLOduration=2.770197643 podStartE2EDuration="8.377085565s" podCreationTimestamp="2026-02-02 16:10:29 +0000 UTC" firstStartedPulling="2026-02-02 16:10:31.278120065 +0000 UTC m=+5832.922756835" lastFinishedPulling="2026-02-02 16:10:36.885007987 +0000 UTC m=+5838.529644757" observedRunningTime="2026-02-02 16:10:37.373200882 +0000 UTC m=+5839.017837662" watchObservedRunningTime="2026-02-02 16:10:37.377085565 +0000 UTC m=+5839.021722345" Feb 02 16:10:37 crc kubenswrapper[4869]: I0202 16:10:37.462983 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 
16:10:37 crc kubenswrapper[4869]: E0202 16:10:37.463313 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:40 crc kubenswrapper[4869]: I0202 16:10:40.143727 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:40 crc kubenswrapper[4869]: I0202 16:10:40.144425 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:41 crc kubenswrapper[4869]: I0202 16:10:41.189602 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8qk85" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" probeResult="failure" output=< Feb 02 16:10:41 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 16:10:41 crc kubenswrapper[4869]: > Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.212754 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.291498 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.457279 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.462674 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:50 crc kubenswrapper[4869]: E0202 16:10:50.462987 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:51 crc kubenswrapper[4869]: I0202 16:10:51.468287 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8qk85" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" containerID="cri-o://c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" gracePeriod=2 Feb 02 16:10:51 crc kubenswrapper[4869]: I0202 16:10:51.962729 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.113942 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"21cffe4b-d876-432a-9dd0-8e04c59313fa\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.114098 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"21cffe4b-d876-432a-9dd0-8e04c59313fa\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.114291 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"21cffe4b-d876-432a-9dd0-8e04c59313fa\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.115069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities" (OuterVolumeSpecName: "utilities") pod "21cffe4b-d876-432a-9dd0-8e04c59313fa" (UID: "21cffe4b-d876-432a-9dd0-8e04c59313fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.121463 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv" (OuterVolumeSpecName: "kube-api-access-wrngv") pod "21cffe4b-d876-432a-9dd0-8e04c59313fa" (UID: "21cffe4b-d876-432a-9dd0-8e04c59313fa"). InnerVolumeSpecName "kube-api-access-wrngv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.217193 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") on node \"crc\" DevicePath \"\"" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.217228 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.234516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21cffe4b-d876-432a-9dd0-8e04c59313fa" (UID: "21cffe4b-d876-432a-9dd0-8e04c59313fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.319449 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476780 4869 generic.go:334] "Generic (PLEG): container finished" podID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" exitCode=0 Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae"} Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476865 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"52ae42d34a9f366250b3a49bfcf92a731d2e83c5ababadba7f489e0906888585"} Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476888 4869 scope.go:117] "RemoveContainer" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476903 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.497973 4869 scope.go:117] "RemoveContainer" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.524538 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.531259 4869 scope.go:117] "RemoveContainer" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.539542 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.571387 4869 scope.go:117] "RemoveContainer" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" Feb 02 16:10:52 crc kubenswrapper[4869]: E0202 16:10:52.571960 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae\": container with ID starting with c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae not found: ID does not exist" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.572011 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae"} err="failed to get container status \"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae\": rpc error: code = NotFound desc = could not find container \"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae\": container with ID starting with c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae not found: ID does not exist" Feb 02 16:10:52 crc 
kubenswrapper[4869]: I0202 16:10:52.572042 4869 scope.go:117] "RemoveContainer" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" Feb 02 16:10:52 crc kubenswrapper[4869]: E0202 16:10:52.572536 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818\": container with ID starting with 578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818 not found: ID does not exist" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.572597 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818"} err="failed to get container status \"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818\": rpc error: code = NotFound desc = could not find container \"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818\": container with ID starting with 578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818 not found: ID does not exist" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.572638 4869 scope.go:117] "RemoveContainer" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" Feb 02 16:10:52 crc kubenswrapper[4869]: E0202 16:10:52.572985 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673\": container with ID starting with 232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673 not found: ID does not exist" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.573027 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673"} err="failed to get container status \"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673\": rpc error: code = NotFound desc = could not find container \"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673\": container with ID starting with 232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673 not found: ID does not exist" Feb 02 16:10:53 crc kubenswrapper[4869]: I0202 16:10:53.476844 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" path="/var/lib/kubelet/pods/21cffe4b-d876-432a-9dd0-8e04c59313fa/volumes" Feb 02 16:11:01 crc kubenswrapper[4869]: I0202 16:11:01.462809 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:01 crc kubenswrapper[4869]: E0202 16:11:01.463544 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:12 crc kubenswrapper[4869]: I0202 16:11:12.463291 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" 
Feb 02 16:11:12 crc kubenswrapper[4869]: E0202 16:11:12.464614 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:27 crc kubenswrapper[4869]: I0202 16:11:27.463668 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:27 crc kubenswrapper[4869]: E0202 16:11:27.466686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:41 crc kubenswrapper[4869]: I0202 16:11:41.462588 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:41 crc kubenswrapper[4869]: E0202 16:11:41.467331 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:53 crc kubenswrapper[4869]: I0202 16:11:53.463717 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:53 crc kubenswrapper[4869]: E0202 16:11:53.464558 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:05 crc kubenswrapper[4869]: I0202 16:12:05.462560 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:05 crc kubenswrapper[4869]: E0202 16:12:05.463377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.248244 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:11 crc kubenswrapper[4869]: E0202 16:12:11.249165 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" 
containerName="extract-utilities" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249179 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="extract-utilities" Feb 02 16:12:11 crc kubenswrapper[4869]: E0202 16:12:11.249214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249220 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" Feb 02 16:12:11 crc kubenswrapper[4869]: E0202 16:12:11.249235 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="extract-content" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249241 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="extract-content" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249431 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.250749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.261254 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.351376 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.351427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.351571 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.453211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.453262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"redhat-marketplace-4tcmf\" (UID: 
\"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.453320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.454274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.454319 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.486990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.570115 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:12 crc kubenswrapper[4869]: I0202 16:12:12.103439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:12 crc kubenswrapper[4869]: I0202 16:12:12.206869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerStarted","Data":"787a9782d45630680398671ceee03bba74f3c66b11586d0b0ab523efcf431b8c"} Feb 02 16:12:13 crc kubenswrapper[4869]: I0202 16:12:13.226207 4869 generic.go:334] "Generic (PLEG): container finished" podID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694" exitCode=0 Feb 02 16:12:13 crc kubenswrapper[4869]: I0202 16:12:13.226522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"} Feb 02 16:12:13 crc kubenswrapper[4869]: I0202 16:12:13.231828 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 16:12:14 crc kubenswrapper[4869]: I0202 16:12:14.236051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerStarted","Data":"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"} Feb 02 16:12:15 crc kubenswrapper[4869]: I0202 16:12:15.245630 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae" exitCode=0 Feb 02 16:12:15 crc kubenswrapper[4869]: I0202 16:12:15.245727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"} Feb 02 16:12:16 crc kubenswrapper[4869]: I0202 16:12:16.254859 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerStarted","Data":"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"} Feb 02 16:12:16 crc kubenswrapper[4869]: I0202 16:12:16.279020 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4tcmf" podStartSLOduration=2.871680299 podStartE2EDuration="5.279003186s" podCreationTimestamp="2026-02-02 16:12:11 +0000 UTC" firstStartedPulling="2026-02-02 16:12:13.231421068 +0000 UTC m=+5934.876057838" lastFinishedPulling="2026-02-02 16:12:15.638743955 +0000 UTC m=+5937.283380725" observedRunningTime="2026-02-02 16:12:16.275084712 +0000 UTC m=+5937.919721482" watchObservedRunningTime="2026-02-02 16:12:16.279003186 +0000 UTC m=+5937.923639956" Feb 02 16:12:17 crc kubenswrapper[4869]: I0202 16:12:17.463136 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:17 crc kubenswrapper[4869]: E0202 16:12:17.463869 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:21 crc kubenswrapper[4869]: I0202 16:12:21.572307 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:21 crc kubenswrapper[4869]: I0202 16:12:21.572674 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:21 crc kubenswrapper[4869]: I0202 16:12:21.620528 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:22 crc kubenswrapper[4869]: I0202 16:12:22.361619 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:22 crc kubenswrapper[4869]: I0202 16:12:22.410961 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.321334 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4tcmf" podUID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerName="registry-server" containerID="cri-o://ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" gracePeriod=2 Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.808351 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.953082 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.963326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.963385 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.964804 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities" (OuterVolumeSpecName: "utilities") pod "836c110e-4a7e-4cb2-b896-3c8adc5bff81" (UID: "836c110e-4a7e-4cb2-b896-3c8adc5bff81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.969204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv" (OuterVolumeSpecName: "kube-api-access-rmbzv") pod "836c110e-4a7e-4cb2-b896-3c8adc5bff81" (UID: "836c110e-4a7e-4cb2-b896-3c8adc5bff81"). InnerVolumeSpecName "kube-api-access-rmbzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.064767 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.064795 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") on node \"crc\" DevicePath \"\"" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.271132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "836c110e-4a7e-4cb2-b896-3c8adc5bff81" (UID: "836c110e-4a7e-4cb2-b896-3c8adc5bff81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.330816 4869 generic.go:334] "Generic (PLEG): container finished" podID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" exitCode=0 Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.330876 4869 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.331642 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"}
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.331771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"787a9782d45630680398671ceee03bba74f3c66b11586d0b0ab523efcf431b8c"}
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.331852 4869 scope.go:117] "RemoveContainer" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.358598 4869 scope.go:117] "RemoveContainer" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.369812 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.381870 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"]
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.390106 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"]
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.396555 4869 scope.go:117] "RemoveContainer" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.444444 4869 scope.go:117] "RemoveContainer" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"
Feb 02 16:12:25 crc kubenswrapper[4869]: E0202 16:12:25.444969 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6\": container with ID starting with ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6 not found: ID does not exist" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445013 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"} err="failed to get container status \"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6\": rpc error: code = NotFound desc = could not find container \"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6\": container with ID starting with ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6 not found: ID does not exist"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445040 4869 scope.go:117] "RemoveContainer" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"
Feb 02 16:12:25 crc kubenswrapper[4869]: E0202 16:12:25.445457 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae\": container with ID starting with c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae not found: ID does not exist" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"
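The RemoveContainer / NotFound pairs above and immediately below show the kubelet retrying deletion of containers that are already gone; for a delete, the runtime's NotFound is effectively success. A sketch of that idempotency check, assuming the grpc-go status helpers (google.golang.org/grpc); the function name is ours, not kubelet's:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a runtime error means the container no
    // longer exists, so a repeated delete can be treated as a no-op
    // rather than a failure.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        // Shape of the error in the entries above (container ID shortened).
        err := status.Error(codes.NotFound, `could not find container "ac583b67..."`)
        fmt.Println(alreadyGone(err)) // true
    }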
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445491 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"} err="failed to get container status \"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae\": rpc error: code = NotFound desc = could not find container \"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae\": container with ID starting with c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae not found: ID does not exist"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445513 4869 scope.go:117] "RemoveContainer" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"
Feb 02 16:12:25 crc kubenswrapper[4869]: E0202 16:12:25.445853 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694\": container with ID starting with a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694 not found: ID does not exist" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445875 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"} err="failed to get container status \"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694\": rpc error: code = NotFound desc = could not find container \"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694\": container with ID starting with a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694 not found: ID does not exist"
Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.476609 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" path="/var/lib/kubelet/pods/836c110e-4a7e-4cb2-b896-3c8adc5bff81/volumes"
Feb 02 16:12:29 crc kubenswrapper[4869]: I0202 16:12:29.469483 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"
Feb 02 16:12:29 crc kubenswrapper[4869]: E0202 16:12:29.470383 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:12:40 crc kubenswrapper[4869]: I0202 16:12:40.464432 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"
Feb 02 16:12:40 crc kubenswrapper[4869]: E0202 16:12:40.465267 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
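The "Cleaned up orphaned pod volumes dir" entry removes the last on-disk trace of the deleted pod under /var/lib/kubelet/pods/<podUID>/volumes, the same tree the restorecon output at the top of this log walks. A small Go sketch that lists those per-pod directories; comparing the UIDs against the pods the API server still reports is how orphans would be identified, and that comparison is left out here:

    package main

    import (
        "fmt"
        "os"
    )

    // listPodDirs enumerates the per-pod directories kubelet keeps under
    // root (normally /var/lib/kubelet/pods); each directory name is a
    // pod UID like the one in the entry above.
    func listPodDirs(root string) ([]string, error) {
        entries, err := os.ReadDir(root)
        if err != nil {
            return nil, err
        }
        var uids []string
        for _, e := range entries {
            if e.IsDir() {
                uids = append(uids, e.Name())
            }
        }
        return uids, nil
    }

    func main() {
        uids, err := listPodDirs("/var/lib/kubelet/pods")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, uid := range uids {
            fmt.Println(uid)
        }
    }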
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:52 crc kubenswrapper[4869]: I0202 16:12:52.463504 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:53 crc kubenswrapper[4869]: I0202 16:12:53.582209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"20477e96901339ca056ebc58e8723c143f29eddd88b9c8140ac0e9687c1639e3"}